Time Series Prediction Approaches

Time Series Journal



Top Stories

Solr vs. Elasticsearch. Elasticsearch vs. Solr. Which one is better? How are they different? Which one should you use? Before we start, check out two useful cheat sheets to guide you through both Solr and Elasticsearch and to help boost your productivity and save time when you’re working with either of these two open-source search engines: the Solr Metrics API Cheat Sheet and the Elasticsearch DevOps Cheat Sheet. These two are the leading, competing open-source search engines, known to anyone who has ever looked into (open-source) search. They are both built around the same core underlying search library – Lucene – but they are different. Like everything, each has its own set of strengths and weaknesses, and each may be a better or worse fit depending on your needs and expectations. In the past, we’ve covered Solr and Elasticsearch differences in Solr Elasticsearch... (more)
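To make the comparison concrete, here is a minimal sketch of issuing the same keyword search to both engines over their HTTP APIs, assuming default local installs (Solr on port 8983, Elasticsearch on port 9200); the "articles" core/index name and the "title" field are hypothetical placeholders, not anything from the article.

    # Minimal sketch: the same keyword search against Solr and Elasticsearch.
    # Assumes default local installs; the "articles" core/index and "title"
    # field are hypothetical placeholders.
    import json
    import urllib.request

    def solr_search(term):
        # Solr exposes search through the /select request handler and URL parameters.
        url = f"http://localhost:8983/solr/articles/select?q=title:{term}&wt=json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["response"]["docs"]

    def es_search(term):
        # Elasticsearch exposes search through the /_search endpoint and a JSON query DSL.
        body = json.dumps({"query": {"match": {"title": term}}}).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:9200/articles/_search",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            hits = json.load(resp)["hits"]["hits"]
            return [hit["_source"] for hit in hits]

    if __name__ == "__main__":
        print(solr_search("lucene"))
        print(es_search("lucene"))

The point of the sketch is the difference in query style: Solr leans on URL parameters and request handlers, while Elasticsearch leans on a JSON query DSL sent over REST.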

It’s Never Obvious: About Percentiles

In an earlier blog, the evolution of performance metrics – for example, from load time to above-the-fold time to speed index – was discussed. As much as this evolution is warranted in the wake of the dynamic application landscape and changing user expectations, the phenomenon also contributes to metrics overload (discussed previously here). This makes systematic, automatic and robust data analysis paramount. Detecting sudden changes (or trend shifts) and detecting outages are example steps toward this end. Some of the common statistics used for data analysis are: arithmetic mean, median, geometric mean and standard deviation. The proper use of the above is discussed here. It is pretty common in the Ops world to monitor multiple percentiles of a given metric. For instance, the 95th percentile (commonly referred to as 95p) of Document Complete is monitored. In addition, 99p (a... (more)
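As a concrete illustration of why percentiles are monitored alongside averages, here is a minimal sketch in Python over synthetic "Document Complete" timings (the values are made up); it shows how a couple of slow outliers barely move the median while pulling the mean and the 95p/99p upward.

    # Minimal sketch: summary statistics and percentiles over synthetic
    # "Document Complete" timings in milliseconds (the values are made up).
    import math
    import statistics

    timings_ms = [812, 790, 1034, 905, 2210, 880, 760, 940, 3020, 870, 915, 890]

    def percentile(data, p):
        # Nearest-rank percentile: the smallest sample such that at least
        # p percent of the data is less than or equal to it.
        ordered = sorted(data)
        rank = math.ceil(p / 100 * len(ordered))
        return ordered[rank - 1]

    print("mean    :", round(statistics.mean(timings_ms), 1))
    print("median  :", statistics.median(timings_ms))
    print("geomean :", round(statistics.geometric_mean(timings_ms), 1))
    print("stdev   :", round(statistics.stdev(timings_ms), 1))
    print("95p     :", percentile(timings_ms, 95))
    print("99p     :", percentile(timings_ms, 99))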

KDE 3.1 vs. GNOME 2.2: How GNOME became LAME

(LinuxWorld) — Judging from the comments about my article last week, many readers seem to have missed the point. I used the installation experience to draw attention to both the negative and positive consequences of the different designs in GNOME and KDE. What should have tipped off most readers is the fact that the very things I complained about — the GNOME approach of scattering configuration files, the imitation of the Windows registry, the inconsistency of the user interface, the lack of features in the user interface, the lack of features in Nautilus, etc. — have nothing to do with GNOME on Debian. Unless Debian alone has a special "crippled design" version of GNOME 2.2 that is based on an entirely different framework than GNOME 2.2 for every other distribution, the issues I raised apply whether your installation of GNOME goes perfectly o... (more)

The Confusion Solution, Big Blue Goes Wireless

According to IBM, the B2E space is where the fastest and most readily measurable "wireless ROI" is to be obtained within the enterprise. Michel Mayer, general manager of IBM's pervasive computing division, calls it IBM's "sweet spot"; Dean Douglas, general manager of mobile e-business for IBM Global Services, says that IBM's 90,000 business partners around the world are demanding it; and Michael G. Maas, director of marketing at IBM's Wireless Solutions business, calls it "the biggest thing to happen at IBM since we moved beyond the mainframe." "It" of course is wireless capability, as exemplified by the explosion of devices connecting to the Internet in search of back-end systems and of each other. Mobile phones, combined PDA/handhelds, wireless e-mail devices - you name it, IBM has a piece of it...a piece of the wireless action. And it's a big piece. According t... (more)

Java Games Development - Part 2

Part 1 of this series appeared in the August issue of Java Developer's Journal (Vol. 8, issue 8). JDJ: I'd just like to pick up on that 85% portability goal Jeff mentioned earlier. I'm just going on assumptions, but I think if you were developing a title for the PS2, GameCube, and XBox you would attempt to make sure that only the graphics and audio functionality were platform-specific and make the rest of the game as portable as possible. Seventy-five to eighty-five percent portability would therefore seem to be an achievable goal in C/C++, in which case Java has just lost one of its advantages, has it not? Cas P: Usually I'm even more optimistic than Jeff about something here. I think I can achieve 100% portability. By focusing on a "pure Java platform" like the LWJGL (Lightweight Java Gaming Library), which, once you realize you're coding to the LWJGL Java API, not... (more)

A Nightclub in Your Pocket

4G will revolutionize wireless entertainment by allowing users to access content at broadband speeds. The killer apps for entertainment include gaming, books/magazines, gambling, video, and adult content. 4G wireless - wireless ad hoc peer-to-peer networking - eliminates the spoke-and-hub weakness of cellular architectures because the elimination of a single node does not disable the network. Simply put, if you can do it in your home or office while wired to the Internet, you can do it wirelessly on a 4G network. My son was playing Pokémon Red on his GameBoy the other day. Apparently bored with that, bored with the other color versions of the Pokémon spectrum, and with no other kids within one meter to connect his GameBoy to via a cable, he ended his journey with Pikachu for that day. But what if my son could battle against Ash, Misty, and Brock without a ca... (more)

It's Official: Welcome to the 'Technology Bounce Back'

All the myriad commentators who monitor Internet technologies and the i-Technology companies on the NASDAQ doubtless have their own private cluster of indicators that they use to take a weather-check on the overall state of the industry. For some, it's as simple as looking at the NASDAQ index level. This (wholly understandable) approach is the one adopted by SYS-CON's own Roger Strukhoff, who wrote recently: After going over 5000 at the height of the dot.com bubble, we all know that it plunged precipitously and consistently for the next 18 months. Any hope of a quick recovery was dashed by 9/11, and then a new flicker of hope was extinguished when war came in March 2003. Since then, the NASDAQ's most important numbers have been 2000, 2000, and 2000. The first of the three numbers represents the year of its peak, the second the level at which it settled, and the thi... (more)

Multi-Core and Massively Parallel Processors

As software developers we have enjoyed a long trend of consistent performance improvement from processor technology. In fact, for the last 20 years processor performance has consistently doubled about every two years or so. What would happen in a world where these performance improvements suddenly slowed dramatically or even stopped? Could we continue to build bigger and heavier, feature-rich software? Would it be time to pack up our compilers and go home? The truth is, single-threaded performance improvement is likely to see a significant slowdown over the next one to three years. In some cases, single-thread performance may even drop. The long and sustained climb will slow dramatically. We call the cause behind this trend the CLIP level:
C - Clock frequency increases have hit a thermal wall
L - Latency of processor-to-memory requests continues as a key performance... (more)
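To make the consequence for developers concrete, here is a minimal sketch of spreading an embarrassingly parallel computation across all available cores with Python's standard multiprocessing module; the workload (summing squares over chunks of a range) is arbitrary and only illustrates that further speedups now come from adding cores rather than waiting for faster clocks.

    # Minimal sketch: splitting an embarrassingly parallel workload across cores.
    # The workload itself (summing squares over a range) is arbitrary.
    from multiprocessing import Pool, cpu_count

    def sum_of_squares(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        workers = cpu_count()
        step = n // workers
        # Carve the range into one chunk per worker; the last chunk absorbs the remainder.
        chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
                  for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(sum_of_squares, chunks))
        print(f"{workers} workers, total = {total}")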

Evolution of Web 3.0

Web 3.0 is a different way of building applications and interacting on the web. The core model of Web 3.0 holds that the entire World Wide Web will be seen as a single database. Many tools are being developed through which interactivity between different websites with different data can be enhanced. The prediction is that Web 3.0 will ultimately be seen as web applications that are pieced together. These applications share a number of characteristics: they are relatively small, their data lives in the cloud, they can run on any device - PC or mobile phone - they are fast, and they are customizable. Furthermore, the applications are distributed virally: literally by social networks or by email. That's a very different application model than we've ever seen in computing. However, there is still considerable debate as to what the term Web 3.0 means, and what a suitable definition might... (more)
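As a small, hedged illustration of the "web as a single database" idea, the sketch below pulls JSON from two hypothetical web data sources and joins the records locally, the way a mashup-style application might; the URLs and field names are placeholders, not real services.

    # Minimal sketch of the "web as a database" idea: fetch JSON from two
    # hypothetical web data sources and join them locally like tables.
    # The URLs and field names are placeholders.
    import json
    import urllib.request

    def fetch_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def build_report():
        # Hypothetical endpoints: one service exposes events, another exposes venues.
        events = fetch_json("https://api.example.com/events.json")
        venues = fetch_json("https://api.example.org/venues.json")
        venues_by_id = {v["id"]: v for v in venues}
        # Join the two sources on venue id, as if they were tables in one database.
        return [
            {"event": e["name"], "city": venues_by_id[e["venue_id"]]["city"]}
            for e in events
            if e["venue_id"] in venues_by_id
        ]

    if __name__ == "__main__":
        print(build_report())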

Aster Data Systems Announces MicroStrategy Certification, Alliance, and Reseller Agreement

LAS VEGAS, NV -- (Marketwire) -- 01/13/09 -- MicroStrategy World 2009 -- Aster Data Systems, a proven innovator of high-performance analytic database systems for frontline data warehousing, today announced that the MicroStrategy Business Intelligence Platform(TM) has been certified for the Aster nCluster(TM) analytic database. MicroStrategy is a leading worldwide provider of business intelligence (BI) software. Aster Data will be demonstrating the Aster nCluster analytic database with the MicroStrategy Business Intelligence Platform at MicroStrategy World 2009, January 13-16, 2009, at booth #610. The certification will afford business managers easy access to advanced analytic capabilities such as time-series analysis and pattern recognition, all of which can be set up an... (more)

Fujitsu Develops Industry's First Integrated Development Platform for Big Data

Slashes data processing development time by 80%; enables integrated development for large-scale stored data analysis and complex event processing Kawasaki, Japan, Aug 21, 2012 - (JCN Newswire) - Fujitsu Laboratories Limited today announced the development of the industry's first integrated big data development platform for processing large volumes of diverse time-series data. In recent years, massive amounts of diverse data - as represented by sensor data, human location data, and other kinds of time-series data - continue to grow at an explosive pace. This has prompted the development of parallel batch processing technologies such as Hadoop(1), as well as complex event processing technologies(2) for processing data in real time. However, because each processing technology has employed different types of development and execution environments, it has been difficult to... (more)
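The announcement contrasts parallel batch processing (Hadoop-style) with complex event processing; as a rough illustration of those two styles, and not of Fujitsu's platform or APIs, the sketch below runs a batch aggregation over a stored time series and then an event-at-a-time, sliding-window alert over the same data.

    # Minimal sketch contrasting the two styles the announcement mentions:
    # batch aggregation over stored time-series data versus event-at-a-time
    # (complex event processing style) handling of a stream. Illustrative only;
    # this is not Fujitsu's platform or API.
    from collections import deque

    # Synthetic (timestamp, value) pairs standing in for sensor readings.
    readings = [(t, 20 + (t % 7)) for t in range(100)]

    # Batch style: process the whole stored series at once.
    def batch_average(series):
        return sum(v for _, v in series) / len(series)

    # Event style: evaluate a rule incrementally as each reading arrives,
    # here a sliding-window alert on the mean of the last 5 values.
    def stream_alerts(series, window=5, threshold=23.5):
        win = deque(maxlen=window)
        for t, v in series:
            win.append(v)
            if len(win) == window and sum(win) / window > threshold:
                yield (t, sum(win) / window)

    if __name__ == "__main__":
        print("batch average:", batch_average(readings))
        print("alerts:", list(stream_alerts(readings)))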