
Last day of Engineering Life


Hope it stays here forever >>>>>>> @IEM_ECE(2018-2022)

Can Artificial intelligence take over our jobs in the future?


Artificial intelligence has developed immensely in the last few years and ranks among the greatest inventions of our time. But with the advancement of this technology comes a risk the world is beginning to foresee: many people have doubts about the impact this development will have on the employment of ordinary people.
There is speculation and fear around the world that millions of jobs could be lost in a future employment crisis. People fear losing their jobs to machines and artificial intelligence in no time. Now is the time to analyze the socioeconomic conditions around the world and how this development could impact employment sectors.

The jobs at risk

Repetitive tasks can easily be mechanized, rendering some positions redundant over time. For example, technology and automation are increasingly replacing human labor in customer service and call center operations, document categorization, search and retrieval, and content moderation. Intelligent machines that can navigate space safely, locate and transfer items such as goods, components, or tools, and conduct intricate assembly procedures are replacing humans in duties related to production lines and factory operation. Even more complex activities, such as those involving real-time processing of many signals, data streams, and accumulated knowledge, demonstrate AI's superiority. Autonomous cars, for example, can collect and interpret data about the world and its dynamics in real time, and decide and act according to pre-determined optimization goals.
Jobs that AI may be capable of replacing

It is speculated that many repetitive tasks can easily be taken over by AI with the help of machine learning and other advanced technologies. Here are some of the jobs that are under threat.

Receptionist: As the character Pam from The Office predicted, computerized phone and scheduling systems can replace much of the traditional receptionist function, especially at modern technology firms that lack office-wide phone infrastructure and at worldwide organizations.

Couriers and deliveries: Drones and robots are already displacing couriers and delivery staff, and it is only a matter of time before the entire sector is mechanized. At the same time, this sector is expected to grow by 5% by 2024, so it will not happen as quickly as we think.

Salesperson: As advertising shifts away from traditional platforms and toward internet and social media environments, people are no longer needed to handle sales for marketers that want to obtain ad space. By providing free application program interfaces (APIs) and self-serve ad marketplaces, more social media networks are removing the salesperson and making it faster and easier for users to generate revenue, which is reflected in the industry's expected 3% reduction. Just like advertising salespeople, retail salesperson jobs are at stake as well. With services like self-checkout, companies are democratizing the customer experience, and today's client is much more internet-savvy and inclined to do their own research and purchasing.

Proofreaders: Proofreading software is commonly accessible, and HubSpot makes extensive use of it. From Microsoft Word's simple spelling and grammar check to Grammarly and the Hemingway app, there are now various technologies that make it much easier to self-check your work.

Telemarketers: We have all received robocalls advertising various products and services, and telemarketing job growth is expected to decline by 3% by 2024. This is due in part to the requirements for success: unlike other sales occupations, telemarketing does not require a high level of cerebral or psychological aptitude. Because open rates for direct telephone calls are typically less than 10%, the role is an excellent candidate for automation.

Bookkeeping: Jobs in this industry are expected to decline by 8% by 2024, and it is easy to understand why: much bookkeeping is already being automated. It is no surprise that this profession has such a high probability of replacement, because tools like QuickBooks, FreshBooks, and Microsoft Office already do the bookkeeping for you at a fraction of the expense of hiring someone.

Compensation and benefits manager: This one is surprising, given that job growth in the role is predicted to increase by 7% by 2024. As businesses grow in size, manual and paper-based processes create more hurdles, time delays, and costs, especially in global markets. When it comes to providing benefits to large groups of employees, automated benefits systems can save time and effort, and companies like UltiPro and Workday are already well known here.

Computer support specialist: With so much information on the internet, including directions, step-by-step tutorials, and hacks, it is no surprise that companies will rely more heavily on bots and automation in the future to manage support requests from employees and customers.

Market research analyst: Market research experts are important in the development of communications, content, and products, but autonomous AI and surveys are making it easier to obtain this data.
GrowthBot, for example, can conduct market research on nearby firms and competitors with a single Slack query.
Conclusion

Though the complete list of replaceable jobs is still unpredictable given the rapid development of these technologies, many jobs remain irreplaceable by AI to date. These include writers, editors, event planners, graphic designers, software developers, human resource managers, public relations managers, chief executives, and sales managers. Such jobs are still safe from AI competitors because they require more human interaction, high emotional intelligence, creativity, and critical thinking.
While AI will be able to automate some of the more time-consuming tasks, it will not be able to fully replace the human emotions and behaviors that customers and audiences are familiar with. Some clients still prefer to speak with a live customer service professional rather than a bot when they have a problem. A company will regularly require a chief executive or managers with strong emotional intelligence and other teamwork-oriented qualities. Similarly, AI may not be able to swiftly replace a creative job or service that requires employees to think outside the box or try new things.

Source: Analytics Insight

The Indian Army will benefit from Artificial Intelligence and air-based sensors along the LAC


In the face of an increasingly aggressive PLA, the Army is stepping up surveillance capabilities in the Eastern Sector, from new Artificial Intelligence-enabled software that tracks Chinese patrol movements to an integrated range of ground and air-based sensors that look deep across the Line of Actual Control. As part of a multi-pronged strategy, road infrastructure is being dramatically improved, particularly in the sensitive Tawang area, with a network of bridges and tunnels that will reduce reliance on air assets to support soldiers in all weather conditions.

Senior Army officials gave an inside look at the new capabilities, saying that not only are all existing surveillance assets being deployed, but younger officers have been charged with developing bespoke systems tailored specifically to the needs of the Tibet border. Last year, area domination patrols and visits by senior PLA officials across the border increased in the Tawang sector, a pattern also witnessed in Uttarakhand and Sikkim. As previously noted, this has caused concern in some cases, such as the Barahoti incident in Uttarakhand in late August, when transgressing PLA forces caused infrastructure damage.

Establishments like the division surveillance centre at Rupa, which receives real-time photographs and inputs of PLA movements along the LAC, are critical to the response to such provocations. To design a reaction strategy, the inputs - from UAVs, helicopter-based sensors, ground radars, and satellite feeds - are gathered and analysed. The picture that emerges in the command centre is eye-opening, from the number of intruding troops to the vehicles they drive and the infrastructure being built beyond the border. Officials say this results in a faster reaction time, which can help reduce violations.

"We are leveraging technology to improve our knowledge of the situation without increasing our deployment. Our main focus is sensor fusion; our ground and air-based sensors are being combined, and we're always working to increase our capabilities," said Maj Gen Zubin Minwalla, commander of the 5 Mountain Division stationed in Rupa, Arunachal Pradesh, which is responsible for Tawang's defences.

The Army is also developing unique methods in-house to aid this increased surveillance. One initiative, now being validated through trials by soldiers deployed forward, is an AI-enabled programme that separates signatures picked up by battlefield surveillance radars. The programme classifies signals to determine whether soldiers, vehicles, or animals are moving, and transmits real-time updates to the command centre, allowing it to develop a response. Another device under development is a portable surveillance system that can be deployed across the border to count the number of transgressing soldiers and identify their method of transportation, passing the information to higher commanders for counter-action.
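As an illustration of the kind of signature classification described above, here is a toy sketch in Python. Everything in it, from the feature names to the numbers, is invented for demonstration; the Army's actual programme is not public, and a real radar pipeline would be far more involved.

# Illustrative sketch only: a toy classifier in the spirit of the
# signature-classification programme described above. Feature names
# and data are invented; real radar processing is far more complex.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300

# Hypothetical per-target features: micro-Doppler spread (Hz),
# radar cross-section (dBsm), and speed (km/h).
doppler = np.concatenate([rng.normal(20, 5, n),    # animal
                          rng.normal(60, 10, n),   # soldier
                          rng.normal(150, 30, n)]) # vehicle
rcs = np.concatenate([rng.normal(-10, 2, n),
                      rng.normal(-5, 2, n),
                      rng.normal(10, 3, n)])
speed = np.concatenate([rng.normal(5, 2, n),
                        rng.normal(4, 1, n),
                        rng.normal(40, 10, n)])

X = np.column_stack([doppler, rcs, speed])
y = np.array(["animal"] * n + ["soldier"] * n + ["vehicle"] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))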

IS TIME UP FOR OLE GUNNAR SOLSKJAER?


Source: @FabrizioRomano

Last season, Manchester United finished second with excellent performances after the winter break, and after signing Cristiano Ronaldo this summer they were among the favorites to win the title this season. They started the season well, sitting in pole position, until they lost 1-0 to Aston Villa and suffered a shock defeat to Young Boys in the Champions League. Manchester United then lost 4-2 to Leicester City and suffered a humiliating 5-0 thrashing at the hands of Liverpool on Sunday. Liverpool scored four goals in the first half, and Man Utd couldn't pull it back. Half-time substitute Paul Pogba received a red card, and Liverpool fans even taunted Man Utd with chants of "Ole's at the wheel." In their last five games, Man Utd have won only one, drawn one, and lost three, leaving them in 7th position with 14 points and struggling in defense. Antonio Conte and Zinedine Zidane have been linked with Man Utd after the club's poor run under Ole. Zidane has the better chance of getting the job, as he has an outstanding record in both league games and the Champions League. He has even expressed his willingness to take on the challenge and compete in European competitions.

How can machine learning be both fair and accurate?


When it comes to utilising machine learning to make public policy decisions, researchers at Carnegie Mellon University are questioning a long-held belief that there is a trade-off between accuracy and fairness. As the use of machine learning has grown in areas such as criminal justice, hiring, health care delivery, and social service interventions, so have concerns about whether such applications introduce new inequities or amplify existing ones, particularly among racial minorities and people with low income. To guard against this bias, adjustments are made to the data, labels, model training, scoring systems, and other parts of the machine learning system. The underlying theoretical assumption is that these modifications make the system less accurate.

In new research published in Nature Machine Intelligence, a CMU team sets out to refute that belief. Rayid Ghani, a professor in the School of Computer Science's Machine Learning Department and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in ML; and Hemank Lamba, a post-doctoral researcher in SCS, tested the assumption in real-world applications and discovered that the trade-off was negligible in practice across a range of policy domains.

"You can truly obtain both. You don't have to forgo precision to create fair and equal processes," Ghani said. "However, it does need the conscious creation of fair and equitable processes. Off-the-shelf solutions aren't going to cut it."

Ghani and Rodolfa concentrated on circumstances in which in-demand resources are limited and machine learning techniques are utilised to assist in allocating them. The researchers looked at four systems: prioritising limited mental health care outreach based on a person's risk of returning to jail, to reduce reincarceration; predicting serious safety violations to better deploy a city's limited housing inspectors; modelling the risk of students not graduating from high school on time to identify those who need additional support; and assisting teachers in reaching crowdfunding goals for classroom needs.

In each case, the researchers discovered that models tuned for accuracy, a common strategy in machine learning, could accurately predict the desired outcomes, but their intervention recommendations showed significant disparities. When the researchers made tweaks to the models' outputs aimed at increasing fairness, they observed that discrepancies based on race, age, or income, depending on the situation, could be addressed without sacrificing accuracy.

Ghani and Rodolfa believe their findings will persuade other researchers and policymakers to reconsider how they use machine learning in decision-making. "We urge the artificial intelligence, computer science, and machine learning communities to stop assuming that accuracy and justice are mutually exclusive and instead start creating systems that optimise both," Rodolfa said. "We expect that policymakers will use machine learning as a decision-making tool to assist them in attaining more egalitarian outcomes."

Reference: Tech Xplore
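As a concrete aside to the result above, here is a toy sketch, emphatically not the CMU team's code: it compares a single accuracy-oriented global threshold with per-group thresholds chosen to equalize selection rates, on purely synthetic scores. The 0.5 threshold and top-20% selection rate are arbitrary choices for illustration.

# Toy sketch (not the CMU team's method): adjust decision thresholds
# per group so the intervention rate is equal across groups, then
# compare accuracy against a single global threshold. Data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                # two demographic groups
risk = rng.beta(2, 5, n) + 0.10 * group      # group 1 scores skew higher
truth = rng.random(n) < np.clip(risk, 0, 1)  # outcome correlated with score

def accuracy(pred):
    return (pred == truth).mean()

# Single global threshold (picked for illustration, not tuned).
global_pred = risk > 0.5
print("global threshold accuracy:", accuracy(global_pred))

# Per-group thresholds: each group gets the same top-20% selection rate.
pred_fair = np.zeros(n, dtype=bool)
for g in (0, 1):
    mask = group == g
    thr = np.quantile(risk[mask], 0.8)       # top 20% within each group
    pred_fair[mask] = risk[mask] > thr
print("per-group threshold accuracy:", accuracy(pred_fair))
print("selection rates:", [pred_fair[group == g].mean() for g in (0, 1)])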

Chelsea and Erling Haaland Saga Still On


Source: @FrankKhalidUK

After missing out on the Erling Haaland deal, Chelsea signed Romelu Lukaku in the summer. However, the club's pursuit of Haaland continues. Last night, Chelsea won their Champions League group stage match against Malmo FF 4-0, but the victory was marred by injuries to Lukaku and Timo Werner. In the 23rd minute, Lukaku got injured and had to be replaced by Kai Havertz; then, in the 44th minute, Werner also had to come off, replaced by Callum Hudson-Odoi. In the post-match press conference, a question about Erling Haaland was put to Chelsea coach Thomas Tuchel in light of both strikers' injuries. Tuchel replied, "Of course we talk about him... but let us see what will happen in the next weeks. [Haaland playing with Lukaku], I have no problem talking about that..." It can be inferred from this statement that if Lukaku fails to find form and score goals soon for Chelsea, having not scored in his last eight games, then Chelsea might pursue Haaland in the upcoming winter transfer window.

VIRAT KOHLI TO STEP DOWN AS TEAM INDIA'S T20 CAPTAIN AFTER THE 2021 T20 WORLD CUP


Source: @imVkohli

Virat Kohli is stepping down as captain of the Indian T20 team after the 2021 T20 World Cup. In a statement, he said it has been an honor not just to play for the Indian cricket team but to lead it into this T20 World Cup, and he stressed that he is stepping down from the T20 captaincy of his own will. No one forced him, and he will give everything to Team India whenever he gets the chance. Kohli first captained India in a T20 on January 15, 2017, in the series against England, after M.S. Dhoni stepped down from both the ODI and T20 captaincy. Rohit Sharma, who has won the IPL trophy five times and has often led the side whenever Kohli left the field during T20 matches, is the most likely successor.

HOW TO START AN ARTIFICIAL INTELLIGENCE COMPANY


AI is expected to become a billion-dollar industry in India within the next ten years, so let's talk about starting an AI startup company. Many experts believe that artificial intelligence is the world's future, and with the rise of AI-enabled chatbots, voice assistants, and self-driving automobiles, it is difficult to disagree. How can you get started, now that you have the opportunity to profit from this cutting-edge technology? At first glance it appears a difficult undertaking, but don't worry; we've got your back. This article contains some of the most useful advice for starting an AI startup.

The most popular AI trends for small businesses

Artificial intelligence was a popular topic in the computer sector a few years ago. Everyone wanted a piece of the action, but no one knew what to do with it. That has changed dramatically, as the industry now has a much better understanding of what AI is, and isn't, capable of. As a result, some of the most successful AI-based company ideas in recent years include:

• Language translation and recognition
• Preparing tax documents
• Stock prediction
• Crowd-sourced information

AI is currently used mostly in applications where computers must recognise patterns in data, such as a CRM system. Much human work, on the other hand, is monotonous, and more of it will be automated as computers come to exceed humans at practically every repetitive task. Automating repetitive manual tasks is where the jobs of the future will be found.

Steps to take when starting an AI company

For a successful launch, an AI firm needs the right team, which includes programmers, software engineers, a data scientist, a marketing manager, and others. It is similar to any other company endeavor in that getting the appropriate people on board early is critical to success. There's a lot more to it, so we recommend reading our guide to starting a small business, but assembling a team of valuable employees will be a top priority. The most crucial positions to fill right away are the basic data science functions, and building prototypes will take at least two to three employees. A designer for product design and UX, product managers, user experience experts, and other marketing support employees are additional positions for AI firms. After you've built your AI platform, the most crucial task will be marketing, but you'll need a solid product to promote. That means your first priority should be hiring competent, knowledgeable data scientists and engineers to train your AI platform to comprehend, learn, and improve its performance over time.

Obtaining money for your AI venture

There are a lot of factors to consider when starting an AI company: data scientists aren't cheap, and you'll require a lot more money than the ordinary software startup. You'll need to create not merely a product that attracts customers and clients, but also a service that can assist them in putting it into action. When it comes to AI startup funding, you have a few options:

The bootstrap method (self-funding): Self-funding offers the benefit of granting you complete control over your company. On the downside, it exposes you to the greatest financial risk.

Attracting venture capitalists: Angel investors or venture capitalists may be interested in assisting your company in exchange for a seat on the board of directors or a percentage of the company's stock.
A comprehensive business strategy may be required to secure financial investment.

Getting a loan for a small business: If you don't have enough money but want complete control over your company, a loan may be useful. Prepare a detailed business strategy for banks and credit unions, including cost estimates and financial forecasts. You should look into some AI startup funding success stories to see how successful companies went about getting the money they needed to grow.

Begin with a simple test project

This is my personal favorite way to start an innovation company: start small. Before you invest hundreds of thousands of dollars in your company's strategy, try a pilot project to see whether you can keep going in the same direction or should make changes to improve your chances of success. A small AI test project might be an email system that can extract key information, such as dates and addresses, from a message. It's not groundbreaking technology, but it could help you launch an AI firm. Larger pilot projects might include using AI to help a product company automatically recommend other items based on a person's browsing history, or evaluating the sentiment of your website's visitors to automatically generate positive and negative keywords that boost sales. If you can't produce good predictions and insights from your data, a pilot will reveal that early; if you can, it helps prove you're on the right track before you invest significant resources in your AI business.
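Here is a minimal sketch of the pilot project just described: extracting dates and simple street addresses from an email with regular expressions. A production system would more likely use a trained named-entity-recognition model; the patterns and sample text below are purely illustrative.

# Minimal sketch of the email pilot project described above.
# Regex patterns are illustrative; real systems would use an NER model.
import re

EMAIL = """Hi team, the investor meeting is moved to 14/11/2021.
Please ship the demo kit to 221 Baker Street before 03/12/2021."""

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")
ADDRESS_RE = re.compile(
    r"\b\d{1,5}\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\s+"
    r"(?:Street|St|Road|Rd|Avenue|Ave)\b"
)

print("dates:", DATE_RE.findall(EMAIL))       # ['14/11/2021', '03/12/2021']
print("addresses:", ADDRESS_RE.findall(EMAIL))  # ['221 Baker Street']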
Marketing: create a digital footprint by reaching out to customers

Marketing your AI startup to a customer base is difficult for a variety of reasons. The first challenge is persuading customers that they genuinely need your product, or that they can't simply buy it somewhere else. The second is brand advocacy in the digital environment: most customers can't even describe what your business does or doesn't do, let alone the specifics. The first step is to create a communications plan that describes the channels you'll use, who you'll connect with, and how much information about your technology you're willing to share. Establishing your startup's digital marketing strategy can be difficult, but not impossible. Consider the following three strategies to assist you:

• Create a skeleton for your AI startup, sometimes known as a digital footprint. Identify which data sets and analytics products your firm will need to design and execute a digital marketing strategy, using analytics tools like Qualtrics, Drossos, or even Google Analytics. Examine your available data sets to determine how you can clear up any ambiguity and maintain clarity about what you're doing, who you're doing it with, and why.

• Make a list of the channels you'll use to develop a marketing strategy. What channels have you already set up? What will your marketing plan entail in terms of expanding on your current products and services?

• Establish an implementation strategy that lays out the steps you'll take to create, implement, and evaluate your digital marketing initiatives. Create measurable and accountable metrics to track your performance and growth over time.

By establishing a digital footprint, setting up processes to monitor and measure, and detailing how you'll evaluate your results and guide future efforts, you'll create a framework for developing brand advocacy and customer acquisition.

Let's start a conversation about your idea in the comment section.

Reference: Analytics Insight

How to value time in your life?


Do you know that every living soul gets exactly the same amount of time? A homeless person sleeping under a bridge or on a park bench has just as many hours in the day as the world's most productive industrialist. So what we do with our time is what matters.

Can you hear it? I hear the clock ticking, and time waits for no man. Life passes by rapidly. Don't waste these years; live them, struggling with everything you have against that ticking clock, so that at the end of a hard day you can put your head down satisfied.

It makes me crazy when people say they don't have enough time to work out at the gym for 45 minutes a day, or to spend 45 minutes to an hour improving themselves, whether physically or intellectually. Imagine spending one hour a day learning about history: how much will you know after 365 hours? It drives me crazy because we have 24 hours a day and sleep for six of them, so there are still 18 hours left. How do you address it? Do you face it? You face it a single day at a time. If you're overweight and have to lose weight, the answer is patience. Right now, who are you? "I'm fat" is not an easy thing to accept about yourself. But if you lose three or four pounds, that is a huge accomplishment. You have to live in your own world; you can't judge yourself against the competition we have made up in our minds, the ideas about how people look, how people act, how intelligent people are. That is why social media is horrible. This is a race you're running alone.

You want to improve! You want to get better! You want to get on a workout program or a clean diet or start a new business. You want to write a book or make a movie or build a house or a computer or an app. Where do you start? You start right here! When do you start? You start right now! Here are some suggestions that may help you to value your time.

Know your time

It is hard to stop procrastinating or improve your productivity if you don't measure your time, because you have to know where your time goes before you can manage it better. Your memory is not enough: would you have an answer if I asked you what you were doing exactly one week ago? There you go. So how do you get to know your time? Keep an activity log. I often ask users to keep an activity log for two weeks before I even have a real session with them. An activity log is exactly what you would imagine: an hour-by-hour record of what you do all day long. It doesn't matter which particular method you use; the only important thing is to keep records for at least two weeks, and preferably for a whole month. On my desk I keep just a pen and a notepad, and every hour I write down the time and what I did during the last hour. Keeping the notebook visible is important, so you don't forget about it.

Point out the non-productive work

This is in fact a simple step. I have only one question for you: take the recurring events in your log one by one and ask, what would happen if you stopped doing them? When the answer is "all hell would break loose," don't change anything. But if your reply is "nothing would happen," you've struck gold. We all do activities with zero return; I call those activities time-wasters.

Remove the time-wasters

Boom. Now you stop wasting time. You know where your time goes.
Separate the critical tasks in your life from the trivial ones, and cut down on the tedious, time-consuming ones. "Is it easy?" Yes. If you want to be a super-efficient person, keep a log regularly. You don't need to keep one 365 days a year; instead, do two or three multi-week stretches a year. This is enough to monitor your time and identify new time-wasters. As an additional advantage, such a simple exercise makes you think about your daily routine. Often the ways we spend time harden into habits, and it is difficult to break bad habits if you aren't aware of the ineffective behavior. This is one of the most powerful things I have found to stop wasting time.

So stop thinking about it. Stop dreaming about it. Start doing it: take that first step. And make it happen!

Feel free to share your thoughts and suggestions in the comment section.

Source: Success

What Is the Importance of Knowledge-Supervised Deep Learning?


In deep learning, graphs have proven to be a primary means of communication for neural networks. Researchers all across the world are working on iterations and experiments to unlock AI's potential for recognition, solving complex problems, and producing correct results. Machines have generally relied on inputs fed into a model to produce particular outputs, in contrast to the recognition and observation developed by the human brain. Researchers, however, want to use graphs to allow neural networks to function more independently, with or without inputs.

Since Leonhard Euler's development of graph theory in the 18th century, mathematicians, physicists, computer experts, and data scientists have used it as one of the most efficient tools for analytics. Knowledge graphs have been widely used by researchers and data scientists to organise structured knowledge on the internet. A knowledge graph is a method for combining data from several sources and feeding it into a neural network for processing. With structured inputs provided to a neural network, researchers believe AI can be trained to imagine the unseen through extrapolation. Unlike machine learning, which requires humans to train the parameters of a given model using data, deep learning allows the neural network to train itself using both structured and unstructured data.

What is a knowledge graph's purpose?

Knowledge graphs, unlike other approaches, are directed labelled graphs with well-defined label meanings. These graphs have nodes, edges, and labels: any object, place, person, or company can act as a node; an edge is the relationship of interest between two connected nodes; and labels define the meaning of node-to-node relationships. For example, employees at a company are nodes, their links to their respective departmental managers are edges, and the relationship between two employees can be defined with the label "colleagues." By deploying such node-and-edge networks in machine and deep learning, data science researchers aim to make neural networks more independent in executing tasks.
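As a toy version of the employee example above, the following sketch builds a small directed, labelled knowledge graph with the networkx library and runs a simple traversal over it. The entities and labels are invented for illustration.

# Toy directed, labelled knowledge graph matching the employee example.
# Entity names and labels are invented for illustration.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("Asha", "Acme Corp", label="works_at")
kg.add_edge("Ben", "Acme Corp", label="works_at")
kg.add_edge("Asha", "Ben", label="colleague_of")
kg.add_edge("Priya", "Asha", label="manages")
kg.add_edge("Priya", "Ben", label="manages")

# Simple traversal: who does Priya manage?
reports = [v for _, v, d in kg.out_edges("Priya", data=True)
           if d["label"] == "manages"]
print("Priya manages:", reports)  # ['Asha', 'Ben']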
In 2012, Google implemented knowledge graphs in its search process, replacing PageRank, which relied on links to rank a page on the web for validity. According to Neo4j, the latency of a search operation on the web using the graph network is proportional to how much of the graph you wish to explore, not to how much data is stored. Octavian, an open-source research group, employs neural networks to accomplish tasks on knowledge graphs. While it is often said that deep learning performs well with unstructured data, the neural networks modelled by Octavian deal with specific structured information, and their architectures are engineered to suit the structure of the data they work with.

In 2018, one of Octavian's data science engineers gave a session at the Connected Data London conference comparing standard deep learning methods with graph learning. "How would we like machine learning on graphs to appear from 20,000 feet?" he asked, to frame his argument. Because neural networks require training to execute any task, his research revealed that many machine learning approaches used on graphs had fundamental flaws: some solutions required converting the graph into a table, abandoning its structure, while others did not generalise to graphs they had not seen. The Octavian researcher identified several "graph-data tasks" that demanded "graph-native implementations like Regression, Classification, and Embedding" in order to overcome these restrictions and perform deep learning on graphs.

Relational inductive biases

This line of work on deep learning over graphs studies which data structures operate well with neural networks. Images, for example, have a structured format because they are two- or three-dimensional, with pixels closer to each other being more relevant to one another than those further apart. Sequences have a one-dimensional structure in which elements that are close together are more significant to each other than those far apart. Unlike pixels in images or elements in sequences, however, nodes in graphs do not have such predefined relationships. Octavian argues that the secret to getting the greatest outcomes from a deep learning model is to create neural network models that match the graphs.

Google Brain, in collaboration with MIT and the University of Edinburgh, published a study on "relational inductive biases" to test the correctness of results obtained through deep learning on graphs. The study developed a general approach for propagating datasets through a graph and suggested that, by employing neural networks to learn six functions that "conduct aggregations and transforms within the structure of the graph," state-of-the-art performance can be attained on a variety of graph applications.

Combinatorial generalisation

Combinatorial generalisation refers to constructing new inferences, predictions, and behaviours by combining known building blocks. The joint thesis presented by Google Brain, MIT, and the University of Edinburgh suggested that combinatorial generalisation is the most important goal for artificial intelligence on the way to human-like abilities. The paper claims that structured representations and computations are the keys to equipping AI with human-like analysis capabilities. The graph network, according to the paper, generalises and extends existing approaches for neural networks that operate on graphs, giving a simple interface for manipulating structured information and producing "structured behaviours." Participants in this study used deep learning on graphs to demonstrate that, when working with graphs, neural network models produce more accurate outcomes.
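To give a flavour of "aggregations and transforms within the structure of the graph," here is a bare-bones sketch of one round of neighbourhood message passing in plain NumPy. It is a simplified stand-in for the graph networks in the paper, with random weights in place of learned ones.

# Simplified message-passing sketch: aggregate neighbour features,
# then apply a shared learned transform. Weights here are random;
# a real graph network would learn them by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],   # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))   # initial 8-dim feature vector per node
W = rng.normal(size=(8, 8))   # shared transform applied at every node

deg = A.sum(axis=1, keepdims=True)
messages = (A @ H) / deg              # mean of each node's neighbour features
H_next = np.maximum(0, messages @ W)  # transform + ReLU nonlinearity
print(H_next.shape)                   # (4, 8): updated node representations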
Google's knowledge graph

Google is a well-known example of the research described above: it has relied on knowledge graphs for the past eight years. Google's knowledge graph was launched on May 16, 2012, with the goal of improving the search experience, and Google has used it to improve how it generates search results. Simply put, if you Google the name of a movie such as 'Batman: The Dark Knight,' you will get all conceivable results, including posters, videos, hoardings, commercials, and movie theatres screening the film. It is this deep-learning knowledge graph that Google has been employing to improve its search engine, which receives billions of visitors every day.

Google's X lab created a neural network of 1 billion connections running on 16,000 computer processors, after which this artificial brain visited YouTube and looked for cat videos. Despite the fact that the neural network had no labelled input, Google's artificial brain employed deep learning algorithms and knowledge graphs to complete one of the most common queries that even a human brain makes. In such a learning process, the network uses numerous hidden layers to give the best outcome for any input. Google has achieved ground-breaking results with knowledge graphs, and in 2014 it acquired DeepMind to enhance its research and study of deep learning algorithms. Google Assistant, voice recognition on Facebook and in smartphones, Siri, face-recognition unlocking, fingerprint unlocking, and other advancements are examples of how AI has achieved recognition through analysis.

Please comment below with your thoughts; you can also start a community post on our website. Thanks for reading.

Reference: Analytics India Mag

The Race of the Quantum


Quantum computing is improving day by day as the underlying quantum technology develops. Many researchers and developers have joined hands with the biggest technology companies to improve, stabilize, and commercialize it. As of July 2021, a group of Chinese researchers was reported to be leading the quantum computing race, but Google, IBM, Intel, and other quantum computer developers are not far behind. Google, IBM, and others have built their first wave of quantum computers, but these systems are still in the early stages and are not yet ready for commercial applications. It is too early to declare the forerunners of this race.

In conventional computing, information is stored in binary bits, each of which can be either a "0" or a "1". In quantum computing, information is stored in quantum bits, or qubits, which can hold a combination (a superposition) of "0" and "1" as well as either value alone. This combination lets the computer perform more calculations at once with less effort.

The technology is still developing, however, and it may take almost a decade before it launches commercially. That is not stopping companies, governments, R&D organizations, and universities from developing the technology and pouring billions of dollars into the arena. If they are realized, quantum computers could accelerate the development of new chemistries, drugs, and materials. The systems could also crack almost any encryption, which has made their development a top priority among several nations. And across the board, they could provide companies and countries with a competitive edge.

"Quantum computing is at the forefront of national initiatives," said Amy Leong, senior vice president at FormFactor. "There have been more than $20 billion in investments announced across 15 countries. Geopolitical powerhouses like the U.S. and China are certainly leading the race to claim quantum supremacy, followed by a host of others from Europe and Asia."

The competition is heating up between nations and organizations alike. In a significant milestone, the University of Science and Technology of China (USTC) revealed in June 2021 what experts believe is the world's fastest quantum processor, beating Google's 53-qubit device, which had held the unofficial record since 2019. The 66-qubit processor at USTC completed a complicated calculation in 1.2 hours that would have taken 8 years on today's supercomputers.

"When I take a look at the first applications, we're going to need several thousand, if not 100,000, qubits to do something useful," James Clarke, director of quantum hardware at Intel, said in an interview. "If we're at 50 to 60 qubits today, it's going to be a while before we can get to 100,000 qubits. It's going to be a while before we can get to 1 million qubits, which would be necessary for cryptography."

In the meantime, there is a race within the race. Vendors are working on a dozen different types of qubits using a variety of technologies, such as ion trapping, silicon spin, and superconductivity. Each camp's vendors say their technology is superior and will allow the development of practical quantum computers; it is too early to declare a winner here, too. Nonetheless, the market appears promising: according to Hyperion Research, the quantum computer industry will expand from $320 million in 2020 to $830 million in 2024.
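The 0-and-1 combination can be made concrete with a tiny simulation. The sketch below, a toy statevector simulator rather than anything vendors actually ship, applies a Hadamard gate to each of three qubits, after which all 2^3 = 8 basis states carry equal probability at once.

# Toy statevector simulation: n qubits correspond to 2**n amplitudes.
# Applying a Hadamard gate to every qubit puts the register into an
# equal superposition over all 2**n basis states.
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                 # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

for q in range(n):                             # apply H to each qubit in turn
    op = np.array([[1.0]])
    for i in range(n):
        op = np.kron(op, H if i == q else np.eye(2))
    state = op @ state

print(np.round(np.abs(state) ** 2, 3))  # equal probability over all 8 states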
The race between classical and quantum computing

Seen on a timescale, the computing field has advanced tremendously. ENIAC, the first general-purpose electronic digital computer, was built by the University of Pennsylvania in 1945. Using vacuum tubes, which controlled the flow of electrons, ENIAC processed data at a rate of 5,000 additions per second. The 1950s saw the transition from vacuum tubes to transistors, which enabled smaller and faster computers. Meanwhile, the now-defunct Control Data introduced the CDC 6600, the world's first supercomputer, in 1964. Based on transistors, the 6600 had a 60-bit CPU delivering 2 MIPS of performance.

In today's world, a smartphone is faster than the earliest computers. Apple's iPhone 12 uses the A14 CPU, built on TSMC's 5nm technology, with 11.8 billion transistors, a 6-core CPU, and a 16-core neural engine that can perform 11 trillion operations per second. Fugaku, developed by Riken and Fujitsu and based on Arm's A64FX CPU, maintained its position as the world's fastest supercomputer in 2021. It has 7,630,848 cores and can perform 442 petaflops per second; a petaflop is one quadrillion floating-point operations per second. Fugaku is up and running and is being used for a variety of applications. In a paper presented at the 2021 Symposia on VLSI Technology and Circuits, Satoshi Matsuoka, director of Riken's Centre for Computational Science, said, "(Fugaku) embodies technologies realised for the first time in a major server general-purpose CPU, such as 7nm process technology, on-package integrated HBM2, terabyte-class streaming capabilities, and an on-die embedded high-performance network."

"We are well into the petaflop computing era," said Aki Fujimura, the CEO of D2S. "There are many research computers around the globe that are approaching exascale computing (1,000 petaflops). We will have many exascale computers by the end of this decade."

Indeed, the biotechnology, defense, materials research, health, physics, and weather prediction industries all demand increased compute capacity to handle present and future problems. "We need to compute more at the same price. The problems are getting harder. The problems we serve are getting bigger and harder on top of that," said Fujimura.

While traditional computing continues to advance, the quantum computing sector is racing to catch up. These new devices have the potential to outperform today's supercomputers, speeding up the development of new technology. Quantum computers are projected to be able to crack the world's most complicated algorithms in a reasonable amount of time. Shor's algorithm, for example, solves the integer factorization problem and could be used to break the commonly used RSA public-key cryptography method.

Quantum computing, first proposed in the 1980s, has made significant progress over the years. Two systems have recently attained "quantum supremacy", the point at which a quantum computer can perform a task that a traditional computer cannot. Still, quantum computing is in its infancy, and work is under way to improve these systems and identify practical uses for the technology.
"All systems that exist today are primarily used to explore future quantum applications, including looking at variational quantum algorithms for quantum chemistry, and quantum kernel estimation methods for machine learning," said Jerry Chow, IBM's head of quantum hardware system development. "The systems that are deployed today are also interesting from the standpoint of benchmarks and characterization of their own performance, and to understand underlying noise sources to improve future iterations of these systems. One other aspect is to explore the concept of quantum error correction."

Even when quantum computers reach their full capability, they will not replace current computers. "Quantum computing is clearly an important future technology for some types of computing problems. Prime factorization is another task that quantum computing is known to be far superior at than classical computing," said D2S' Fujimura. "In a way, quantum computing will augment classical computing for some specific difficult problems. On a larger scale, quantum computing will not replace classical computing. Classical computing is more appropriate for many of the tasks we need to compute."

Today's quantum computers are unusual-looking machines, resembling massive chandeliers. A dilution refrigerator protects the processor and other components from external noise and heat, cooling the device to between 10 and 15 millikelvin. In a quantum system, the qubits are integrated into a processor. There are two types of qubit gates: one-qubit and two-qubit gates. Imagine a 16-qubit quantum processor with the qubits placed in a four-by-four, two-dimensional array: one-qubit gates could make up the first three rows (from top to bottom), with two-qubit gates on the last row.

The processing roles are intricate. In traditional computing, you input a number, the computer calculates the function, and then outputs the result. "If you have 'n' bits, you have 2^n. That's an exponentially large number of states, and you can only work on them one at a time. So, it's exponential in time or exponential in space," William Oliver, a professor at the Massachusetts Institute of Technology (MIT), said in a video presentation. "A quantum computer, on the other hand, can take those 2^n different components and put them all into one superposition state simultaneously. And this is what underlies the exponential speed-up that we see in a quantum computer."

"In order to double the power of a quantum computer, you only have to add one qubit. It's exponential. In order for a quantum computer to keep up with a classical computer in terms of Moore's Law, they only have to add one qubit every 24 months," agreed Moor Insights & Strategy analyst Paul Smith-Goodson.

In theory, everything works. In practice, several key challenges are preventing quantum computing from reaching its full potential. First, noise causes qubits to lose their characteristics within about 100 microseconds, according to IBM; that is why qubits must operate at extremely cold temperatures. "Qubits are extremely sensitive to their environment," said FormFactor's Leong. "Quieting down the qubit environment in a very cold or cryogenic environment is critical." Furthermore, noise introduces faults within the qubits, so quantum computers need error correction. On top of that, quantum computers must be scaled up to thousands of qubits, and the field is a far cry from that figure.
"We need to make qubits better than we're making them today. And that's across the field," said Intel's Clarke. "To me, the biggest challenge is how you wire them up. Every qubit requires its own wire and its control box. That works well when you have 50 or 60 qubits. It doesn't work well when you have a million of them."

It is also crucial to produce high-yielding qubits, and Onto Innovation and others are developing metrology methods around the technology. "Right now, we've conducted measurements on a few wafers or coupons," said Kevin Heidrich, senior vice president at Onto Innovation. "The key behind most of the foundational technologies in quantum is utilizing the manufacturing technologies developed for classical computing. However, many are tweaking the devices, designs and integrations to enable quantum/qubit devices. The key engagements we have are around enabling precise and characterized devices to enable various forms of quantum computing such as photonic or spin qubits. Our focus is to provide metrology solutions to enable our development partners to best characterize their early devices, including things like precise sidewall control, materials thickness, and interface quality."

Qubit types

According to the Quantum Computing Report, there are now 98 groups working on quantum computers and/or qubits. Companies are developing ion trap, neutral atom, photonic, silicon spin, superconducting, and topological qubits. Each variety is distinct, with its own set of benefits and drawbacks, and it is too soon to say which technology will prevail.

"We really don't know which technology is going to be the right technology to build a grand scheme fault tolerant machine. Companies have a five-year roadmap, leading to where they are going to have enough qubits to actually do something meaningful," said Smith-Goodson of Moor Insights & Strategy. "(Regarding the installed base), IBM has a large number of machines. They have over 20 quantum computers and no one can match that. They have a large ecosystem built up around it. They have a lot of universities and companies that they're working with."

The most progress so far has been made with superconducting qubits. D-Wave has risen to prominence in this category through its use of quantum annealing, a technology that solves optimization problems: given a problem with many possible combinations, a quantum annealing system searches for the best one. These abilities have been demonstrated, at least in part. The majority of the activity, though, is in the general-purpose quantum computer business built on superconducting qubits, where many organizations, including Google, IBM, Intel, MIT, Rigetti, and USTC, are creating products.

Superconducting qubits are built from Josephson junctions. A Josephson junction consists of two superconducting metals separated by a thin insulating barrier; electrons pair together and tunnel through the junction when it is operated. IBM demonstrated a 3-qubit device in 2014 and now offers a quantum computer with 65 qubits for sale. According to the Quantum Computing Report, IBM led the industry in overall qubit count in the superconducting arena until recently; USTC now holds the unofficial record with 66 qubits, followed by IBM with 65, Google with 53, Intel with 49, and Rigetti with 32.
"Qubits and quantum processors are the central part of quantum hardware," said IBM's Chow. "To build a quantum computer or a quantum computing system, we will need not only quantum hardware, but also control electronics, classical computing units, and software that runs quantum computing programs."

To that end, IBM offers Qiskit, an open-source quantum software development kit. The goal is broad developer-community participation: to establish a quantum ecosystem that puts quantum computers in people's hands as critical tools for their research and business.

The industry will need systems with thousands of qubits, and suppliers have a long way to go in this area. However, the results so far are encouraging. Google's 53-qubit Sycamore processor completed a calculation in 200 seconds in 2019; according to Google, the same operation would have taken a supercomputer 10,000 years. Then, in June 2021, USTC presented a paper on Zuchongzhi, a 66-qubit superconducting quantum processor. USTC used 56 of its qubits in a calculation, and it was 2 to 3 times faster than Google's 53-qubit processor at the task. "We expect this large-scale, high-performance quantum processor could enable us to pursue valuable quantum applications beyond classical computers in the near future," Jian-Wei Pan, a professor at USTC, said in the paper.

There have been other breakthroughs in superconducting qubits besides USTC's processor. Rigetti announced a multi-chip quantum processor that will enable an 80-qubit system by the end of the year. IBM will release Eagle, a 127-qubit quantum processor, by the end of the year, plans a 433-qubit processor in 2022, and a 1,121-qubit device in 2023. Google has discovered a method for lowering qubit error rates and intends to construct a 1-million-qubit system by 2029.

Ion trap qubits are another promising technique. Atoms are at the heart of an ion trap quantum processor: according to IonQ, a developer of the technology, the atoms are trapped, and lasers then handle everything from initial preparation to final readout. The Quantum Computing Report has IonQ leading the ion trap field with 32 qubits, followed by AQT (24), Honeywell (10), and others. On the research side, Sandia National Laboratories is working on QSCOUT, a quantum computer testbed based on ion trap qubits. The QSCOUT system currently has three qubits, and Sandia intends to eventually grow it to 32.

"Not only can users specify which gates (each circuit is made up of many gates) they want to apply and when, but they can also specify how the gate itself is implemented, as there are many ways to achieve the same result. These tools allow users to get into the weeds of how the quantum computer works in practice to help us figure out the best way to build a better one," said Clark, the physicist who leads QSCOUT at Sandia. "Since we are a testbed system, the code running on our machine is generated by users, who have lots of ideas of what they might like to run on a quantum computer. Thirty-two qubits are still small enough that it can be fully simulated on a classical computer, so the point is not to do something that a classical computer cannot do. The main reasons for building the smaller system are: 1) to study how to map problems onto a quantum computer the best way for best performance on a future larger system (quantum chemistry, quantum system simulations), and 2) to learn techniques for making a quantum computer run better that can be applied to a bigger machine."
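To give a sense of what such user code looks like, here is a minimal example using Qiskit, the IBM development kit mentioned above. It builds a two-qubit Bell-state circuit and simply prints it, so it runs without access to real quantum hardware.

# Minimal Qiskit example: a two-qubit Bell-state circuit.
# Building and drawing the circuit requires no quantum hardware.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits
print(qc.draw())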
Ion trap is experiencing a surge of interest, similar to the superconducting qubit field. Honeywell's quantum computing unit, for example, will be spun off and merged with Cambridge Quantum Computing, and Honeywell has demonstrated that quantum errors can be corrected in real time. Customers of IonQ, meanwhile, can buy access to its quantum computers through Google's cloud services.

Silicon qubits

Silicon spin qubits show promise as well. Leti, Intel, Imec, and others are developing this technology, and according to the Quantum Computing Report, Intel appears to be in the lead with 26 qubits. Intel is working on a new kind of transistor in which an electron can have spin up or spin down. "When you have two electrons close to each other, or two of these spin qubits, then you can start to perform operations," said Intel's Clarke. "Intel's spin qubits are a million times smaller than some of the other qubit technologies. We're going to need 100,000 to 1 million of them. When I envision what a quantum chip will look like in the future, it will look similar to one of our processors."

Silicon spin qubits use the same processes and tools found in semiconductor fabs, as well as some of the same materials; much of the innovation comes from the materials rather than the patterning capability. Intel has also released Horse Ridge II, a second-generation cryogenic control chip that integrates the control functions for quantum computer operations into the cryogenic refrigerator, reducing the complexity of the system's control wiring. CEA-Leti has created an interposer that allows quantum computing devices to be integrated, connecting qubits and control chips. Imec has developed uniform spin qubit devices with configurable coupling in a 300mm integrated process. Separately, Intel and FormFactor have created cryoprobes, systems that characterize qubits at cryogenic temperatures.

Conclusion

Beyond the qubit types above, photonic qubits, which use particles of light, are also predicted to have a promising impact, so it is uncertain which technology will rule the coming era. Still, many big organizations and companies are counting on quantum computing. A more pressing concern is whether quantum computing can ever live up to its hype, yet companies and countries are placing significant bets on the technology, and considering the progress made thus far, the activity makes it well worth keeping an eye on.

Source: Semiconductor Engineering

3 THREATS TO DIGITAL TRANSFORMATION: WEAPONIZED AI, AUTOMATED HACKING, AND DEEP FAKES

Digital transformation faces three new threats. The attack surface it creates is growing at an exponential rate, opening up new opportunities for cybercriminals, who are adding automated hacking, deepfakes, and weaponized AI to their ever-expanding arsenal of malicious software and zero-day exploits. Let's take a look at how these tools threaten organizations today.

AUTOMATED HACKING

What are the real-world applications of automated hacking, and how might they affect your company? Hackers use applications like Shodan to generate a complete list of web servers, surveillance cameras, webcams, and printers that are connected to the internet. In Sweden, for example, automated hacking methods were used to locate public cameras near a harbor, and the images from those cameras could be used to monitor and detect submarines entering and leaving the port: how long the ships had been underway, their range, and their destinations. None of this requires a team of IT specialists; anyone can do it. Even if your company doesn't rent submarines, security cameras and networked printers are likely installed at your front door. These devices can be identified and accessed remotely, and the data they capture about who enters your office and whom you meet belongs to you. (A minimal sketch of this kind of device enumeration appears further down.)

As previously indicated, cyber-attacks increasingly target specific individuals, an approach known as spear phishing. Rather than simply hoping that naive recipients will click on a phishing email, cybercriminals now try to persuade their victims to hand over money directly. Fake accounts, email addresses, websites, branding, and communication styles are created to imitate a third party or a company executive; when a high-ranking CxO is targeted, the attack is known as "whaling." Reconnaissance is the first step in crafting a convincing message: what kind of clientele does the target company have, how many employees does it have, does it use a specific email template, and what are its weaknesses? Instead of manually searching through publicly available data, attackers use automated tools, making their approach faster and more thorough, with higher success rates.

DEEPFAKE

"Deepfake" is a phrase made up of the terms "deep" (from deep learning) and "fake": computer-generated images and sounds produced with machine-learning algorithms. A deepfake creator manipulates material to replace a real person's image, voice, or both with convincing artificial likenesses. Deepfake technology goes considerably further than ordinary photo-editing tools in how it manipulates visual and audio content: it can create people who do not exist, and it can make real people appear to speak and act in ways they never did. As a result, deepfake technology can be used to spread fake news.

CORPORATE SCAM

Organizations are concerned about a variety of deepfake-based scams, such as those that use deepfake audio to make it appear that the person on the other end of the line is a higher-up, for instance a CEO demanding money from an employee. Extortion scams are one variant.
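Before moving on to other deepfake scams, the automated enumeration described above is worth seeing concretely. The following is a minimal, hypothetical sketch using the shodan Python library (Shodan itself is named in this article; the query string and API-key placeholder are illustrative assumptions). Searches like this should only be used to audit devices you own or are authorized to assess.

# Hypothetical sketch: enumerating internet-exposed devices with the
# shodan Python library. The API key is a placeholder; only audit
# assets you are authorized to assess.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder credential

# Illustrative query: webcams that Shodan has indexed.
results = api.search("webcam")

print(f"Devices found: {results['total']}")
for match in results["matches"][:5]:  # first few hits only
    print(match["ip_str"], match["port"], match.get("org", "unknown org"))

The point is how little expertise this requires; a defender's first job is to run the same kind of query against their own footprint before an attacker does.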
Identity fraud is another deepfake-enabled scam: criminals use the technology to commit crimes such as financial fraud. Many of these scams rely on an audio deepfake, a "voice skin" or "clone" that lets the caller impersonate a specific person. If a voice on the other end of the line claims to be a partner or customer seeking money, do your homework and verify the request independently before acting.

MANIPULATION OF SOCIAL MEDIA

Persuasive manipulations in social media posts can mislead and enrage internet-connected people. Deepfakes can make fake news appear legitimate in the eyes of the media, and they are widely used to stir strong emotions on social networking sites. Consider a chaotic Twitter account that takes aim at all things political and makes ludicrous claims in the hope of causing a disturbance. Is the profile connected to a real person? Possibly not: the account's profile image may have been generated entirely from scratch and may not belong to any real person. If that's the case, the convincing videos the account disseminates are very likely fake as well. Social media platforms such as Twitter and Facebook have banned this type of deepfake.

WEAPONIZED AI

Because of AI and automation, bad actors can carry out more attacks at a faster pace, requiring security staff to stay alert. To make matters worse, because everything happens in real time and the field is advancing rapidly, there is little time to decide whether or not to deploy your own AI defenses. Cyber attackers face the same financial realities as their victims: finding and exploiting zero-day vulnerabilities can cost upwards of six figures, creating new threats and malware takes time and money, and renting malware-as-a-service tools from the dark web carries costs of its own. Like everyone else, attackers want the most bang for their buck, which means improving the efficiency and efficacy of their tools while spending as little money, time, and effort as possible.

Cybercriminals could use AI and machine learning to create malware that seeks out flaws on its own and calculates which modules will be most effective, without revealing itself to its command-and-control (C2) server through constant communication. Advanced persistent threats (APTs) and multi-vector attacks carrying a variety of payloads already exist; by understanding targeted systems on its own, AI makes these tools laser-targeted rather than relying on the slower, scattershot strategy that might tip a victim off that they are being attacked.

Conclusion

You must know what you need to protect and how to protect it. How big is your internet-facing attack surface? What weaknesses are exposed? Automated solutions that identify and evaluate your digital footprint, covering not just your own websites and digital products but also those of third-party suppliers, can help you prevent attacks: all of these assets are linked to your brand, and any of them can severely harm your reputation if cybercriminals compromise them. A minimal sketch of such a self-audit follows the reference below.

Reference: Analytics Insights
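As promised above, here is a minimal, hypothetical sketch of the kind of automated footprint check the conclusion describes. The host list and port choices are placeholder assumptions; only probe systems you own or are authorized to test.

# Hypothetical self-audit sketch: check which common service ports on
# your own hosts accept connections. The host list and ports are
# placeholders; only scan systems you are authorized to test.
import socket

HOSTS = ["example.com"]  # replace with hosts you own
PORTS = [21, 22, 80, 443, 3389, 8080, 9100]  # 9100 = networked printers

for host in HOSTS:
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted
                print(f"{host}:{port} is reachable")

Real attack-surface tools go much further (certificates, subdomains, third-party assets), but even this shows how quickly an exposed printer or remote-desktop port turns up.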
