INFORMATION TECHNOLOGIES (BİLGİ TEKNOLOJİLERİ) - (ENGLISH) - Chapter 8: New Trends in Information Technologies Summary:

Chapter 8: New Trends in Information Technologies

Big Data (Big Information Big Problem)

Big Data, one of the most important research and application fields of today, has entered daily life through the cloud and Internet of Things (IoT) concepts. As technology develops and changes rapidly, devices are constantly renewed and the amount of personal data grows across many different kinds of environments. The notion of “Big Data” refers to a set of data so bulky that it is difficult to process it with conventional tools. The analysis of big data makes it possible, first of all, to better understand users' inclinations and the ways they use services, and to refine the services offered to them. For example, Google Analytics offers companies the opportunity to improve the design of their website by analyzing visitors' behaviour. These applications also have utility in the public sector. Analyzing masses of data also helps to better understand the feelings or needs of citizens. For Barack Obama's re-election campaign in 2012, his advisers analyzed local messages on Twitter in order to better understand citizens' concerns and adapt the president's speech in real time. However, these analyses have their limits in terms of how representative they are of the population.

In order to build a fairly solid idea of the concept and to be able to make decisions about the choice of opportunities, it is necessary to consider five fact sheets organized around two main themes: the fundamentals and the technological infrastructure.

First of all, we will explain the fundamentals. It is possible to describe Big Data in four words: volume, velocity (speed), variety (or heterogeneity), and value. Big Data is the ability to store and process very large amounts of data, on the order of petabytes. Facebook®, for example, manages a database on the order of 100 petabytes, which is equal to about 100 million gigabytes of information. In the digital era, almost all of humanity is involved with the virtual world offered by the Internet. Knowing the best prospects and customers has always been the dream of business ventures; with Big Data, the dream comes true. Big Data, in practice, puts into action the storage and massive data-processing tools developed by Google® for its own needs, later standardized and made operational by the Apache Foundation with the now famous Hadoop. The idea is to parallelize processing on a large scale across very many inexpensive machines organized in clusters. Next, we will mention building the scientific background. As a decision-making aid for managers and decision-makers in charge of a unit, Business Intelligence (BI) must provide them with all the assistance necessary to carry out the activities in their charge in the right direction and according to the expected performance criteria.

Then, we will mention the technological infrastructure. The power of Big Data is based on a revolutionary set of technologies that involves massively parallel processing, real-time data management, and techniques to diminish the effects of system failures, namely the systematic redundancy of data. For processing big data, Hadoop is one of the most reliable tools.

Hadoop is an open-source project run by the Apache Software Foundation; it is inspired by the Google File System and based on the MapReduce principle.

MapReduce is a programming model introduced by Google Corp. Programs that adopt this model are automatically parallelized and executed on clusters of computers.
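To make the pattern concrete, here is a minimal sketch of the MapReduce idea in Python, counting words in a few toy documents. The map, shuffle, and reduce steps all run in a single process here; a real Hadoop job would distribute them across the machines of a cluster.

```python
from collections import defaultdict

# Toy input: on a real cluster these documents would be split across many nodes.
documents = [
    "big data needs big storage",
    "hadoop processes big data in parallel",
]

def map_phase(doc):
    """Map step: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle step: group all emitted values by key (the word)."""
    grouped = defaultdict(list)
    for word, count in pairs:
        grouped[word].append(count)
    return grouped

def reduce_phase(word, counts):
    """Reduce step: sum the counts collected for one word."""
    return word, sum(counts)

# Run the pipeline sequentially; Hadoop would run map and reduce tasks in parallel.
mapped = [pair for doc in documents for pair in map_phase(doc)]
result = dict(reduce_phase(w, c) for w, c in shuffle(mapped).items())
print(result)  # {'big': 3, 'data': 2, 'needs': 1, ...}
```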

NoSQL refers to non-relational databases that break with the very specific organizational philosophy of relational systems, including the SQL query language, the transaction integrity principle (ACID), and the normalization rules.
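As an illustration of the flexible-schema idea behind many NoSQL document stores, here is a minimal sketch using an in-memory Python dictionary rather than a real database engine; the collection name and fields are invented for the example.

```python
import json

# A toy in-memory "document store": documents in the same collection do not
# have to share a fixed schema, unlike rows in a relational table.
users = {}

users["u1"] = {"name": "Ada", "email": "ada@example.com"}
users["u2"] = {"name": "Alan", "devices": ["watch", "phone"], "steps_today": 8042}

def find(collection, predicate):
    """Query by scanning documents, since there is no SQL engine or fixed schema."""
    return [doc for doc in collection.values() if predicate(doc)]

# Find every user document that records wearable devices.
print(json.dumps(find(users, lambda d: "devices" in d), indent=2))
```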

Big Data requires enormous hardware capacity, both for storage and for the processor resources needed for processing, so Cloud Computing technology is used for this purpose.

Internet of Things (IoT)

The term Internet of Things (IoT) was introduced by Kevin Ashton, a British technology pioneer working on Radio Frequency Identification (RFID); with this term he imagined connecting the real world to the Internet through radio frequencies and sensor systems placed anywhere. The fundamental idea behind these terms in today's technology, where we hear the terms smart home and smart city more and more often, is to minimize human effort by providing products that make life easier. For example, you can tell an intelligent voice assistant to read the news about topics you are interested in while cooking or bathing. Although these intelligent systems seem to be concepts of the near future, such examples have already begun to appear around us. The IoT is a widespread network system whose infrastructure is basically formed by device-to-device communication technology and in which the physical objects in an environment communicate with each other or with wider systems. The IoT consists of end devices connected through different network topologies and protocols. These end elements usually carry sensors: electronic devices that can detect almost any kind of physical change, such as pressure, acceleration, sound, light, electromagnetic propagation, humidity, or radioactive material. The IoT is a complex notion, but the main principle is to build a network between “objects” and share data. The “object” defined by the concept of the IoT designates any kind of device that can contain a sensor, is embedded in a system, and is connected to a computer. Two types of object are distinguished in the terminology of the IoT: passive objects and active objects.

Passive objects usually use a tag (RFID chip, 2D barcode, QR code). They have a small storage capacity (on the order of a kilobyte) enabling them to fulfil an identification role. RFID tags, for example, are very common in our environment. In stores, products are protected by anti-theft RFID tags. On highways and bridges, automatic toll systems like HGS or OGS use RFID tags placed on car windshields. RFIDs are also commonly used in access badges to buildings and in the electronic keys of certain automobiles.
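A minimal sketch of this identification role, assuming a hypothetical reader that returns a tag ID which is then looked up in a small catalogue; the IDs and descriptions below are invented.

```python
# Hypothetical catalogue mapping RFID tag IDs (the few bytes a passive tag stores)
# to the objects they are attached to.
CATALOGUE = {
    "E2000017221101441890": "anti-theft tag on a jacket",
    "E2000017221101441891": "HGS windshield sticker, plate 06 ABC 123",
}

def on_tag_read(tag_id: str) -> str:
    """Called when a reader detects a passive tag; the tag only supplies its ID."""
    return CATALOGUE.get(tag_id, "unknown tag")

print(on_tag_read("E2000017221101441890"))
```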

Active objects can be equipped with several sensors, have greater storage capacity, have processing capacity, or be able to communicate over a network. They are the devices for collecting, storing, transmitting, and processing data from the physical world. The most common hardware for these active objects is the embedded system. An embedded system is a complex system that integrates software and hardware designed together to provide specific functionality. It typically contains one or more microprocessors executing a set of programs defined at design time and stored in memory. Embedded systems are now used in a wide variety of applications, such as transport (avionics, space, automotive, rail), electrical and electronic devices (cameras, toys, TV sets, household appliances, audio systems, mobile phones), energy distribution, automation, household applications, etc.
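A minimal sketch of what an active object does, assuming a hypothetical temperature sensor and a simulated transmit step; a real embedded device would query actual hardware and send the data over its network link using a protocol such as MQTT or HTTP.

```python
import json
import random
import time

def read_temperature_sensor() -> float:
    """Hypothetical sensor read; on a real board this would query the hardware."""
    return round(20.0 + random.uniform(-2.0, 2.0), 2)

def transmit(payload: dict) -> None:
    """Stand-in for sending data over the network to a wider system."""
    print("sending:", json.dumps(payload))

# Collect, store, and transmit a few readings, as an active object would.
readings = []
for _ in range(3):
    readings.append({"sensor": "temp-01",
                     "celsius": read_temperature_sensor(),
                     "timestamp": time.time()})
    time.sleep(0.1)

transmit({"device": "demo-node", "readings": readings})
```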

There are many application areas of the IoT, such as smart cities, smart homes, and environmental solutions.

Cloud Computing

One of the most popular terms in the current IT trend is Cloud technology, or Cloud computing. Although the concept seems revolutionary, it has existed for a long time. The term “Cloud” is a marketing term that is somewhat obscure to some, but it can be explained very simply. The term cloud computing, or simply the cloud, refers to a set of technologies and IT service delivery models that enable the use and supply of software, and the ability to store and process large amounts of information, over the Internet. For example, users can freely access their documents without worrying about which machine they are using. There are different types of cloud computing, which can be classified according to the architecture of the “cloud” and the internal or external management of the data processing, both in relation to the service model offered to the customer. Each type of cloud has its own characteristics, which should be evaluated carefully by the companies and public authorities that wish to make use of them. The types of cloud are the private cloud, the public cloud, and the hybrid cloud.

Private cloud is a computer infrastructure (a network of connected computers providing services) dedicated mainly to the needs of a single organization.

Public cloud is a computer infrastructure owned by a supplier that specializes in providing services. Such suppliers deliver services and systems to users, companies, and administrations over the Internet using software applications. Their processing capacity and data storage may vary.

Hybrid cloud is a computer infrastructure with mixed characteristics; it is characterized by solutions that combine services provided by a private infrastructure with services acquired from public clouds.

Generally speaking, we can say that we are using a cloud service of some kind whenever we access the same data through different computers, smartphones, or gaming platforms (a minimal sketch of this follows the list below). The innovations brought by cloud configurations concern:

  • The distribution of services across the network
  • Simple infrastructure scalability
  • Higher reliability and continuity of service
  • Delivery of new computing and storage resources in a very short time
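As an illustration of accessing the same data from different devices, here is a minimal sketch that uploads a document to an S3-compatible public-cloud object store and retrieves it elsewhere, using the third-party boto3 client; the bucket name, file names, and credentials are assumptions and are not part of the chapter.

```python
import boto3

# Hypothetical bucket; credentials are taken from the local environment.
s3 = boto3.client("s3")
BUCKET = "my-personal-cloud-bucket"

# Device A uploads a document to the cloud...
s3.upload_file(Filename="notes.txt", Bucket=BUCKET, Key="documents/notes.txt")

# ...and device B later downloads the same document, with no shared hardware.
s3.download_file(Bucket=BUCKET, Key="documents/notes.txt", Filename="notes_copy.txt")
print("document synchronized through the cloud")
```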

Cloud services and solutions have grown at very high speed over the last five years. The offerings are multiplying, and new players regularly come into play. Most giants in the IT sector, such as Google, Amazon, Apple, and Microsoft, have their own cloud service, and the number of users is constantly growing. The Cloud remains a very attractive concept as soon as you use several devices and want to synchronize your data across them. However, given the increasing popularity of these offers, we are gradually moving towards a total outsourcing of our own data, which brings its own security risks. Here the most critical question has to be answered: is it safe to put files and documents on an external server? In this situation, risks arise especially in terms of access rights and file ownership. Not only can our data be vulnerable; there are risks from the point of view of both security and privacy. Therefore, it is wise to read the terms of service carefully before creating a personal profile for a storage space in a cloud service. The advantages of cloud technologies are:

  • Storage and Scalability
  • Backup and Disaster Recovery
  • Mobility
  • Cost effectiveness

The disadvantages of cloud technologies are:

  • Confidentiality and data security
  • Control and Reliability
  • Compatibility

Wearable Technologies

The first electronically equipped wearable dates back to 1960. It was designed by Claude Shannon, one of the greatest mathematicians of all time, and Edward Thorp, a mathematics professor at MIT, who had set out to beat the dealer in games of chance using mathematical algorithms. After long studying the physics of the ball spinning on the roulette wheel, the two mathematicians created an algorithm that would increase the chance of winning by as much as 44%. The algorithm ran on a microcomputer hidden in a shoe and connected to an earpiece that emitted sounds, allowing the wearer to cheat at roulette. The results were amazing. Nowadays, wearable technology is no longer used for gambling but is exploited for physical activity in the form of smartwatches, fitness-tracker bracelets, and other devices that can be worn. These devices can be connected to a smartphone via Bluetooth and, by means of their various sensors, schedule the most appropriate training for whoever is using them.

Wearable technology comes bundled with a set of innovations that includes both computers and devices closely related to augmented reality and virtual reality technologies. The market for wearable technologies is currently dominated by a large number of devices, such as smart glasses (e.g., Google Glass), watches, and wristbands, which communicate through applications with smartphones and tablets to measure sleep, steps, blood pressure, other health data, etc. In recent years the sector has been estimated to generate revenues of $3 billion.
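A minimal sketch of how a phone application might summarize readings received from such a wearable over Bluetooth; the sample data and field names are invented for illustration.

```python
from statistics import mean

# Hypothetical samples received from a fitness tracker during one day.
samples = [
    {"hour": 9,  "steps": 1200, "heart_rate": 72},
    {"hour": 13, "steps": 2600, "heart_rate": 85},
    {"hour": 18, "steps": 4300, "heart_rate": 96},
]

total_steps = sum(s["steps"] for s in samples)
avg_heart_rate = mean(s["heart_rate"] for s in samples)

print(f"steps today: {total_steps}, average heart rate: {avg_heart_rate:.0f} bpm")
```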

Virtual and augmented reality provide consumers with a completely new way of experiencing content. Virtual reality devices could transform the way events are broadcast by offering users the ability to attend virtual events, concerts, or academic conferences in real time. A viewer wearing an augmented reality device could watch a television show while simultaneously displaying related content (an experience similar to that of the “second screen” mobile phone applications that provide additional information to the user). It would also be possible to call up a search function or a dictionary while reading a book or listening to music.

Finally, the ability to build cumulative data for a community or society as a whole will be extremely attractive to both the private and public sectors. Governments will need to consider how to ensure their access to such life data for public-interest and public-health initiatives.

The future of the wearable technology sector nevertheless remains promising, as these concerns are counterbalanced by considerable potential. The fact that its growth is not faster could have a very simple explanation: consumers may not be ready for the many features offered by wearable devices. Apple was working on the multi-touch interaction technique long before the iPad was created, but waited to bring it to market until consumers had gained an instinctive understanding of the benefits this technology could bring them. It is said that you must learn to walk before you can run; in the same way, we may need to understand the mechanisms of tracking, augmentation, and learning before they can really be useful to us. Otherwise, we will lose all enthusiasm for these novelties. Just imagine the number of activity bracelets and cardiac monitors that already gather dust at the bottom of the gym bags of mature athletes.

Artificial Intelligence

Artificial Intelligence (AI) is a scientific discipline that looks for methods of solving problems of high logical or algorithmic complexity. It is a rapidly evolving field whose foundations were laid in 1950 by Alan Turing, the author of the famous Turing Test (a conversational test of whether a machine can pass for a human). Artificial intelligence evolves constantly in an attempt to imitate human intelligence. In general, we can say that artificial intelligence is the science of creating intelligent machines, finding in the possibilities offered by information technology the most practical way to achieve a result similar to human intelligence. According to Wikipedia, artificial intelligence has been defined as intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, intelligent routing in content delivery networks, and interpreting complex data. (Wikipedia)

The improvements in AI achieved in recent years are unprecedented: increasingly powerful computers and the availability of huge amounts of data, thanks to the Internet, have made it possible to create very elaborate software, which in the coming years could significantly change our relationship with computers, cars, and more of the world around us. Some small pieces of these advances and innovations are already part of our lives, from personal assistants like Siri or Bixby on smartphones, to cars that drive themselves, to sophisticated software like Google's AlphaGo, which recently learned to play the complicated Chinese board game Go better than anyone else, beating the world champion. The ultimate goal of AI, as a science, is to create software and hardware able to achieve goals and solve problems in real life as a human would. Companies like Google, Facebook, Amazon, Uber, Tesla, and other car manufacturers are investing a great deal of resources and money to produce AI, at least in the broadest sense.

Machine learning is one of the core areas of artificial intelligence (AI) and deals with the implementation of systems and algorithms that use observations as data for the synthesis of new knowledge (inductive reasoning).
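A minimal sketch of this inductive idea using the third-party scikit-learn library: a classifier is fitted to a handful of labelled observations and then predicts the label of an unseen example. The features, values, and labels below are invented for illustration.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy observations: [hours of daily activity, resting heart rate] -> fitness label.
X = [[0.5, 95], [1.0, 90], [2.0, 70], [2.5, 65]]
y = ["low", "low", "high", "high"]

# Synthesize "knowledge" from the observations (the inductive step)...
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

# ...and generalize to a new, unseen observation.
print(model.predict([[1.8, 72]]))  # expected output: ['high']
```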

An artificial neural network is a mathematical model that emulates the biological neural networks of the brain, in which neurons are interconnected by synapses.
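A minimal sketch of a single artificial neuron using NumPy: the inputs are weighted (the software analogue of synapses), summed, and passed through an activation function. The weights and input values are arbitrary illustrative numbers, not a trained model.

```python
import numpy as np

def sigmoid(x):
    """Activation function squashing the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

inputs = np.array([0.6, 0.1, 0.9])    # signals arriving at the neuron
weights = np.array([0.4, -0.2, 0.7])  # "synapse" strengths, normally learned from data
bias = 0.05

output = sigmoid(np.dot(inputs, weights) + bias)
print(f"neuron output: {output:.3f}")
```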

With artificial neural networks and machine learning, Google has, for example, improved its translator, making machine translations much more accurate for languages such as English, Spanish, French, Portuguese, Chinese, Japanese, Korean, and Turkish.

Using inferences that summarize previous experience in order to solve chaotic problems and make predictions about the future is one of the most basic applications of AI. For example, AI is frequently used in weather-forecasting applications today, and the reliability of the results obtained is increasing steadily.
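A minimal sketch of prediction from past observations: a simple linear trend is fitted with NumPy to a few hypothetical daily temperature readings and extrapolated one day ahead. Real forecasting models are, of course, far more elaborate.

```python
import numpy as np

# Hypothetical noon temperatures (°C) for the past five days.
days = np.array([1, 2, 3, 4, 5])
temps = np.array([18.0, 19.5, 19.0, 21.0, 22.5])

# Summarize previous observations as a linear trend (an inductive inference)...
slope, intercept = np.polyfit(days, temps, deg=1)

# ...and use it to predict tomorrow's value.
tomorrow = slope * 6 + intercept
print(f"predicted temperature for day 6: {tomorrow:.1f} °C")
```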

Virtual Reality and Augmented Reality

Virtual reality (VR) can be defined as the combination of hardware and software able to create, for the senses of one or more users, a simulated three-dimensional environment in which they can move exactly as if they were in the real world. VR, therefore, is conceived as a digitally created, accessible space that comes into existence through special devices and electronic equipment (not only the headset but also gloves, boots, etc.) that allow the user to interact with a virtual environment generated by powerful software. The virtual space created by the software is perfectly adaptable to user needs: by tracking the motion of the eyes or head, the perspective of objects and scenes is modelled according to the user's point of view, offering an experience that is as immersive and realistic as possible. The first commercial application of VR dates to the 1980s: in 1982 the “Aspen Movie Map”, a VR system that allows the user to explore Aspen virtually, was introduced.

The simulation, obtained with the highest possible degree of realism, must engage the viewer's senses (sight, hearing, touch), giving the impression of becoming part of a world that is real (possibly similar to an actual environment) or imaginary. It can thus be considered a genuine type of mental experience: the person actually believes that he or she is in that world and is able to interact with it, just as in the real world.

There are two main features of this virtual environment:

  • The real perception of being in that world, a feeling amplified by the use of special instruments and images
  • The ability to interact through movements of the body, head, and limbs, increasing the feeling of being able to take possession of that dimension.

In order to achieve these results, the system must provide a graphical user interface (representing the virtual world) and, in particular, three main variables must be handled: space, time, and interaction. Depending on the degree of immersion and involvement, three types of VR can be distinguished: Immersive Virtual Reality (IVR), Non-Immersive Virtual Reality (Desktop VR), and Augmented Reality (AR).

Virtual reality is defined as immersive (Immersive Virtual Reality, IVR) when it is able to create a sense of sensory immersion in a three-dimensional, computer-generated environment.

Non-Immersive Virtual Reality (Desktop VR) is a kind of VR in which the new setting is not perceived as real, since it lacks the feeling of first-person involvement.

Augmented Reality (AR) allows you to superimpose computer-generated images onto real ones, increasing the information content (think of the famous RoboCop movie, where the hero retrieved information about the world and identified the right people through data overlaid on his view). Augmented reality can be defined as a union between the real and the virtual image: virtual details are, in essence, superimposed on the real image so that the two complement each other, creating a single environment. More precisely, in augmented reality you can integrate the actual image with a series of virtual elements such as videos, 3D animations, audio or multimedia elements, and much more. Augmented reality is a technology that allows us to “multiply” our experience of reality through visual indicators.
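To illustrate the idea of superimposing virtual details on a real image, here is a minimal sketch using the third-party Pillow library that draws a highlight box and a text label onto a photograph; the file name, coordinates, and label are invented, and a real AR system would also track the camera and the scene in real time.

```python
from PIL import Image, ImageDraw

# Hypothetical "real" image captured by a camera.
scene = Image.open("street_view.jpg").convert("RGB")
overlay = ImageDraw.Draw(scene)

# Superimpose virtual details: a box around a point of interest and a label.
overlay.rectangle([(120, 80), (260, 200)], outline=(255, 0, 0), width=3)
overlay.text((120, 60), "Cafe - open until 22:00", fill=(255, 0, 0))

scene.save("street_view_augmented.jpg")
print("augmented image written to street_view_augmented.jpg")
```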