Many of us have enough money in our pockets right now to buy all the storage we will be able to fill for the next five years.
So storage capacity is no longer the problem.
Managing it is the problem (especially when the volume gets large).
How much data is there?
TeraBytes (TB) are Here
1 TB costs ~$1k to buy
1 TB costs ~$300k/year to own
Management and curation are the expensive part
Searching 1 TB takes hours
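As a sanity check on that last claim, here is a back-of-the-envelope calculation in Python; the 30 MB/s sequential-read rate is an assumed figure for a commodity disk of this era, not a number from the slide:

```python
# Rough time to sequentially scan 1 TB of data.
# The 30 MB/s transfer rate is an assumption typical of
# commodity disks of this period.
TB = 10**12            # bytes in a terabyte
rate = 30 * 10**6      # assumed sequential read rate, bytes/second

seconds = TB / rate
print(f"Full scan: {seconds / 3600:.1f} hours")  # ~9.3 hours
```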
I’m Terrified by TeraBytes
I’m Petrified by PetaBytes
I’ll soon be Exafied by ExaBytes
(We are here.)
I’m too old to ever be Zettafied by ZettaBytes
But you may be in your lifetime
You may even be Yottafied by YottaBytes
You may never be Googified by GoogiBytes
But the next generation may be?
How much information is there?
Soon everything can be recorded and indexed.
Most of it will never be seen by humans.
Data summarization, trend detection, and anomaly detection are key technologies.
First Disk, in 1956: the IBM 305 RAMAC
50 24” disks
1,200 revolutions per minute (rpm)
access times of hundreds of milliseconds (ms)
$35k/year to rent
Included computer & accounting software (tubes, not transistors)
[Figure: the hardware, 10 years later]
As We May Think, Vannevar Bush, 1945:
“A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility”
“yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so that he can enter material freely”
Can you fill a terabyte in a year? Consider how many it would take of:
a 300 KB JPEG image
a 1 MB document
a 1 hour, 256 kb/s MP3
a 1 hour MPEG video
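The arithmetic is easy to run. A minimal sketch in Python; the ~2 GB/hour size for the MPEG video is my assumption, since the slide does not give its bitrate:

```python
# Items needed to fill 1 TB, and the per-day rate over one year.
# Sizes follow the list above; the 2 GB/hour figure for MPEG
# video is an assumption.
TB = 10**12
items = {
    "300 KB JPEG image":       300 * 10**3,
    "1 MB document":           1 * 10**6,
    "1 hr 256 kb/s MP3":       256_000 // 8 * 3600,   # ~115 MB
    "1 hr MPEG video (~2 GB)": 2 * 10**9,
}
for name, size in items.items():
    per_tb = TB / size
    print(f"{name:>26}: {per_tb:>12,.0f} per TB, {per_tb / 365:>8,.1f} per day")
```

Even at thousands of JPEGs a day, a terabyte takes about a year to fill; with documents or music alone it takes far longer.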
On a Personal Terabyte,
How Will We Find Anything?
We need queries, indexing, and data mining.
If you don’t use a DBMS, you will end up implementing one of your own!
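To see why, here is the kind of ad-hoc index you end up writing by hand: a minimal inverted-index sketch in Python (the sample documents are invented for illustration):

```python
from collections import defaultdict

# A toy inverted index: word -> set of document ids.
# This is exactly the structure a DBMS or search engine
# maintains for you; roll your own and you have started
# implementing one.
docs = {
    1: "terabyte disks are cheap",
    2: "managing a terabyte is the hard part",
    3: "data mining finds patterns in data",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

print(index["terabyte"])  # {1, 2} -- documents containing the word
```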
The need for data mining and machine learning is more important than ever!
Of the digital data in existence today:
80% is personal/individual
20% is corporate/governmental
We’re awash with data!
WWW (and other text collections): 10 terabytes by 2004 (~ 10^13 Bytes)
US EROS Data Center archives of Earth Observing System data (near Sioux Falls, SD), remotely sensed satellite and aerial imagery: 15 petabytes by 2007 (~ 10^16 Bytes)
National Virtual Observatory (aggregated astronomical data): 10 exabytes by 2010 (~ 10^19 Bytes)
Sensor data (including micro- & nano-sensor networks): 10 zettabytes by 2015 (~ 10^22 Bytes)
Genomic/Proteomic/Metabolomic data (microarrays, genechips, genome sequences): 10 yottabytes by 2020 (~ 10^25 Bytes)
Stock market prediction data (prices + all the above?): 10 gazillabytes by 2030 (~ 10^28 Bytes?)
And perhaps 10 supragazillabytes by 2040 (~ 10^31 Bytes?)
(I made up these names! Projected data sizes are overrunning our ability to name their orders of magnitude!)
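For reference, the standard prefix ladder runs out at yotta (10^24); a quick Python loop makes the progression above concrete:

```python
# SI byte-size prefixes. Names past yotta ("gazilla",
# "supragazilla") are the slide's inventions -- the standard
# ladder stopped at 10^24.
prefixes = ["kilo", "mega", "giga", "tera", "peta",
            "exa", "zetta", "yotta"]
for i, name in enumerate(prefixes, start=1):
    print(f"1 {name}byte = 10^{3 * i} bytes")
```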
Useful information must be teased out of these large volumes of raw data.
AND these are just examples from the 1/5 of data collections that are corporate or governmental. The other 4/5 of data sets are personal!
Parkinson’s Law (for data)
Data expands to fill available storage
Disk-storage version of Moore’s Law
Available storage doubles every 9 months!
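A 9-month doubling time compounds quickly. This quick calculation shows the growth factor it implies over a five-year horizon:

```python
# Growth implied by storage capacity doubling every 9 months.
months = 12 * 5                     # a five-year horizon
factor = 2 ** (months / 9)          # number of doublings in that span
print(f"5-year growth: ~{factor:.0f}x")  # ~102x
```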
How do we get the information we need from the massive volumes of data we will have?
Querying (for the information we know is there).
Data mining (for the answers to questions we don’t know to ask precisely).
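The distinction is easy to see in code. A minimal sketch (the data and the z-score rule are illustrative assumptions): a query retrieves records matching a predicate we already know, while mining lets the data flag what we did not know to ask about:

```python
from statistics import mean, stdev

# Daily transaction totals (invented sample data).
totals = [102, 98, 105, 97, 101, 350, 99, 103]

# Querying: we know the question ("which days exceed 300?").
print([t for t in totals if t > 300])       # [350]

# Mining: we don't know the question, so we let the data
# surface what is unusual (a simple z-score outlier test).
mu, sigma = mean(totals), stdev(totals)
print([t for t in totals if abs(t - mu) / sigma > 2])  # [350]
```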