False front facade in Bremen, Germany, by Mario Lurig on Flickr.
One of my colleagues on the Data Architect team, Patrick El-Azem, has just released a new tool for the community that will be extremely useful for developers, architects, data scientists, and anyone else working with large datasets.
As Patrick outlines in his blog post about the tool, there are many reasons you might need a large custom dataset that is free of proprietary or confidential information. In my time in the partner ecosystem, I saw many great demo ideas stall at inception because creating the realistic-looking "dummy" data needed to populate the demo was too difficult and time-consuming.
When you need production-like data that is representative of reality yet safe to use, and neither scrubbing an existing dataset (not secure enough) nor the available public datasets (rarely a good fit) will do, Patrick's Synthetic Data File Generator (SDFG) will get you up and running quickly with a synthetic dataset built to your specifications.
If you've tried other tools for this purpose, such as Mockaroo, you'll find that the SDFG offers more granular control and handles more complexity, for example, interrelated multi-file sets and dynamic fields.
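To give a feel for what "interrelated multi-file sets" means in practice, here's a minimal sketch using only Python's standard library. To be clear, this is not SDFG's actual interface; the file names, columns, and row counts are invented for illustration. It writes two linked CSV files: a customers file and an orders file whose customer_id values reference it, so the two files stay referentially consistent.

```python
# Hypothetical illustration of an interrelated synthetic file set.
# This is NOT the SDFG API; all names and values here are made up.
import csv
import random
import string

NUM_CUSTOMERS = 1_000   # scale these up for larger datasets
NUM_ORDERS = 10_000

def random_text(length=8):
    """Return a random lowercase string to stand in for a real value."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

# File 1: customers, each with a synthetic id, name, and city.
customer_ids = list(range(1, NUM_CUSTOMERS + 1))
with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "name", "city"])
    for cid in customer_ids:
        writer.writerow([cid, random_text(), random_text(6)])

# File 2: orders, whose customer_id column references customers.csv,
# keeping the foreign-key relationship between the two files intact.
with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "customer_id", "amount"])
    for oid in range(1, NUM_ORDERS + 1):
        writer.writerow([oid,
                         random.choice(customer_ids),
                         round(random.uniform(5, 500), 2)])
```

The SDFG handles this kind of relationship declaratively, and at far larger scale; see Patrick's blog for the tool's actual configuration options.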
To get started using the tool, read Patrick's blog, which includes a full walk-through, specs, prereqs, and even a feature roadmap; or head straight to GitHub for the download.
What could you do with 250,000,000 records today? The possibilities are endless!