Software Testing: Recently Published Documents


Combining Learning and Engagement Strategies in a Software Testing Learning Environment

There continues to be an increase in enrollments in various computing programs at academic institutions due to the many job opportunities available in the information, communication, and technology sectors. This enrollment surge has presented several challenges in many Computer Science (CS), Information Technology (IT), and Software Engineering (SE) programs at universities and colleges. One such challenge is that many instructors in CS/IT/SE programs continue to use learning approaches that are not learner-centered and therefore do not adequately prepare students to be proficient in the ever-changing computing industry. To mitigate this challenge, instructors need to use evidence-based pedagogical approaches, e.g., active learning, to improve student learning and engagement in the classroom and equip students with the skills necessary to be lifelong learners. This article presents an approach that combines learning and engagement strategies (LESs) in learning environments using different teaching modalities to improve student learning and engagement. We describe how LESs are integrated into face-to-face (F2F) and online class activities. The LESs currently used are collaborative learning, gamification, problem-based learning, and social interaction. We describe an approach used to quantify each LES used during class activities based on a set of characteristics for LESs and the traditional lecture-style pedagogical approach. To demonstrate the impact of using LESs in F2F class activities, we report on a study conducted over seven semesters in a software testing class at a large urban minority-serving institution. The study uses a posttest-only design, the scores of two midterm exams, and the approximate class time dedicated to each LES and to traditional lecture to quantify their usage in a face-to-face software testing class.
The study results showed that increasing the time dedicated to collaborative learning, gamification, and social interaction and decreasing the traditional lecture-style approach resulted in a statistically significant improvement in student learning, as reflected in the exam scores.
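A significance comparison of the kind the study reports can be illustrated with Welch's t-test on two independent groups of exam scores. This is a minimal standard-library sketch; the scores below are hypothetical, and the study's actual data and statistical procedure are not reproduced here.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = statistics.fmean(sample_a), statistics.fmean(sample_b)
    v1, v2 = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical exam scores before and after increasing active-learning time.
before = [62, 58, 71, 65, 60, 55, 68, 63]
after = [74, 70, 69, 78, 72, 80, 75, 71]
t, df = welch_t(after, before)
print(f"t = {t:.2f}, df = {df:.1f}")  # |t| well above ~2 suggests significance at this df
```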

Enhancing Search-based Testing with Testability Transformations for Existing APIs

Search-based software testing (SBST) has been shown to be an effective technique to generate test cases automatically. Its effectiveness strongly depends on the guidance of the fitness function. Unfortunately, a common issue in SBST is the so-called flag problem, where the fitness landscape presents a plateau that provides no guidance to the search. In this article, we provide a series of novel testability transformations aimed at providing guidance in the context of commonly used API calls (e.g., strings that need to be converted into valid date/time objects). We also provide specific transformations aimed at helping the testing of REST Web Services. We implemented our novel techniques as an extension to EvoMaster, an SBST tool that generates system-level test cases. Experiments on nine open-source REST web services, as well as an industrial web service, show that our novel techniques improve performance significantly.
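The flag problem and a testability transformation can be sketched for the date-parsing case the abstract mentions. This is an illustrative Python sketch, not EvoMaster's actual implementation; the pattern-distance heuristic is an assumption made for the example.

```python
import datetime

def flag_fitness(s):
    """Boolean oracle: the search sees a flat plateau (same value for every failing input)."""
    try:
        datetime.datetime.strptime(s, "%Y-%m-%d")
        return 0.0   # goal reached
    except ValueError:
        return 1.0   # not reached; no gradient toward the goal

def transformed_fitness(s):
    """Testability transformation: distance to the YYYY-MM-DD shape.

    Each character contributes how far it is from the expected character
    class, so inputs that are 'almost' valid dates score lower and the
    search gains a gradient toward the plateau's edge.
    """
    pattern = "dddd-dd-dd"   # d = digit, '-' = literal dash
    dist = abs(len(s) - len(pattern))
    for ch, kind in zip(s, pattern):
        if kind == "d":
            dist += 0 if ch.isdigit() else 1
        else:
            dist += 0 if ch == "-" else 1
    return dist

# Both inputs fail the flag, but only the transformed fitness distinguishes them.
print(flag_fitness("2024-13-40"), flag_fitness("hello"))
print(transformed_fitness("2024-13-40"), transformed_fitness("hello"))
```

Note that `"2024-13-40"` matches the shape but is still not a valid date (month 13), so the transformation approximates rather than replaces the original predicate, which is typical of such transformations.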

A Survey of Flaky Tests

Tests that fail inconsistently, without changes to the code under test, are described as flaky. Flaky tests do not give a clear indication of the presence of software bugs and thus limit the reliability of the test suites that contain them. A recent survey of software developers found that 59% claimed to deal with flaky tests on a monthly, weekly, or daily basis. As well as being detrimental to developers, flaky tests have also been shown to limit the applicability of useful techniques in software testing research. In general, one can think of flaky tests as being a threat to the validity of any methodology that assumes the outcome of a test only depends on the source code it covers. In this article, we systematically survey the body of literature relevant to flaky test research, amounting to 76 papers. We split our analysis into four parts: addressing the causes of flaky tests, their costs and consequences, detection strategies, and approaches for their mitigation and repair. Our findings and their implications have consequences for how the software-testing community deals with test flakiness, pertinent to practitioners and of interest to those wanting to familiarize themselves with the research area.
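The defining property (inconsistent outcomes with no change to the code under test) suggests the simplest detection strategy such surveys cover: rerunning. A toy sketch, where the hypothetical flaky test models hidden shared state, one common cause of flakiness:

```python
def rerun_detector(test_fn, reruns=50):
    """Rerun a test without changing the code under test; mixed outcomes mean flaky."""
    outcomes = {test_fn() for _ in range(reruns)}
    return len(outcomes) > 1

class Counter:
    calls = 0

def flaky_test():
    """Passes except on every 7th invocation: models hidden shared state."""
    Counter.calls += 1
    return Counter.calls % 7 != 0

def stable_test():
    return 1 + 1 == 2

flaky = rerun_detector(flaky_test)
stable = rerun_detector(stable_test)
print(flaky, stable)  # True False
```

Rerunning is cheap but cannot prove a test is stable, only that it has not yet been observed to flake, which is one reason the literature also studies static and hybrid detection strategies.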

Test Suite Optimization Using Firefly and Genetic Algorithm

Software testing is essential for providing error-free software. It is a well-known fact that software testing is responsible for at least 50% of the total development cost. Therefore, it is necessary to automate and optimize the testing processes. Search-based software engineering is a discipline mainly focused on the automation and optimization of various software engineering processes, including software testing. In this article, a novel hybrid firefly and genetic algorithm approach is applied to test data generation and selection in a regression testing environment. A case study is used along with an empirical evaluation of the proposed approach. Results show that the hybrid approach performs well on the various parameters selected in the experiments.
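The genetic-algorithm half of such a hybrid can be sketched for regression test selection. This is a toy example: the coverage matrix, costs, and GA parameters are invented for illustration, and the firefly component and the paper's actual operators are omitted.

```python
import random

# Toy regression suite: each test covers some branches and has a runtime cost.
COVERAGE = {
    "t1": {1, 2}, "t2": {2, 3}, "t3": {4}, "t4": {1, 4, 5}, "t5": {3, 5},
}
COST = {"t1": 2, "t2": 1, "t3": 1, "t4": 3, "t5": 2}
TESTS = sorted(COVERAGE)

def fitness(bits):
    """Reward covered branches heavily, penalize total runtime lightly."""
    chosen = [t for t, b in zip(TESTS, bits) if b]
    covered = set()
    for t in chosen:
        covered |= COVERAGE[t]
    cost = sum(COST[t] for t in chosen)
    return 10 * len(covered) - cost

def ga(pop_size=20, generations=40, rng=random.Random(1)):
    pop = [[rng.randint(0, 1) for _ in TESTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(TESTS))
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(len(TESTS))
            child[i] ^= rng.random() < 0.2        # occasional bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
selected = [t for t, b in zip(TESTS, best) if b]
print(selected, fitness(best))
```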

Machine Learning Model to Predict Automated Testing Adoption

Software testing has two broad approaches: manual testing and automated testing. In automated testing, scripts are written to automate the process of testing. Automated testing suits some software development projects, while others require manual testing; factors such as the nature of the project requirements, the team working on the project, the technology on which the software is being developed, and the intended audience may influence the suitability of automated testing for a given project. In this paper, we develop a machine learning model to predict the adoption of automated testing. We use the chi-square test to find factor correlations and the PART classifier for model development. The accuracy of our proposed model is 93.16%.
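The chi-square step for checking whether a factor is associated with adoption can be sketched for a single 2x2 contingency table. The survey counts below are hypothetical; the paper's actual factors and data are not reproduced here.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table[i][j]: observed count for factor level i and adoption outcome j.
    """
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total   # counts expected under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical survey counts: rows = stable vs volatile requirements,
# columns = adopted vs did not adopt automated testing.
table = [[40, 10],
         [15, 35]]
chi2 = chi_square_2x2(table)
print(f"chi2 = {chi2:.2f}")  # above 3.84 => associated at the 5% level (df = 1)
```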

Metaheuristic Techniques for Test Case Generation

The primary objective of software testing is to locate as many bugs as possible using an optimal set of test cases. Selecting such a set can be viewed as an optimization problem, so metaheuristic optimization (search) techniques have been widely used to automate software testing tasks. The application of metaheuristic search techniques to software testing is termed search-based testing. Search-based testing can generate non-redundant, reliable, and optimized test cases with less effort and time. This article presents a systematic review of several metaheuristic techniques used for test case generation, such as genetic algorithms, particle swarm optimization, ant colony optimization, bee colony optimization, cuckoo search, tabu search, and modified versions of these algorithms. The authors also provide a framework showing the advantages, limitations, and future scope of these research works, which will help guide further research in this area.

Software Testing Under Agile, Scrum, and DevOps

The adoption of agility at a large scale often requires the integration of agile and non-agile development practices into a hybrid software development and delivery environment. This chapter addresses software testing issues for Agile software application development. Currently, Agile methodologies (e.g., Scrum, Extreme Programming, and Development and Operations, i.e., DevOps) have become the preferred tools for modern software development. These methodologies emphasize iterative and incremental development, where both the requirements and solutions evolve through collaboration between cross-functional teams. The success of such practices relies on the quality of each stage of development, obtained through rigorous testing. This chapter introduces the principles of software testing within the context of a Scrum/DevOps-based software development lifecycle.

Quality Assurance Issues for Big Data Applications in Supply Chain Management

Heterogeneous data types, widely distributed data sources, huge data volumes, and large-scale business-alliance partners characterize typical global supply chain operational environments. Mobile and wireless technologies add an extra layer of data sources to this technology-enriched supply chain operation. This environment also needs to provide its end-users with access to data anywhere, anytime. This new type of data set originating from the global retail supply chain is commonly known as big data because of its huge volume and the velocity with which it arrives in the global retail business environment. Such environments empower and necessitate decision makers to act or react more quickly to all decision tasks. Academics and practitioners are researching and building the next generation of big-data-based application software systems. This new generation of software applications is based on complex data analysis algorithms (i.e., on data that does not adhere to standard relational data models). Traditional software testing methods are insufficient for big-data-based applications. Testing big-data-based applications is one of the biggest challenges faced by modern software design and development communities because of a lack of knowledge of what to test and how much data to test. Developers of big-data-based applications face a daunting task in defining the best strategies for structured and unstructured data validation, setting up an optimal test environment, and working with non-relational database testing approaches. This chapter focuses on big-data-based software testing and quality-assurance-related issues in the context of Hadoop, an open source framework.
It includes a discussion of several challenges with respect to massively parallel data generation from multiple sources, testing methods for validating pre-Hadoop processing, software application quality factors, and some of the software testing mechanisms for this new breed of applications.
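One of the pre-Hadoop validation ideas mentioned, cross-checking a raw feed against its ingested form, can be sketched as follows. The field names and checks are illustrative assumptions, not the chapter's actual method.

```python
import csv
import io

def validate_ingest(raw_lines, parsed_rows, required_fields=("sku", "qty", "ts")):
    """Cross-check a raw feed against its post-ingest rows.

    Typical pre-Hadoop validation: no records silently dropped, every row
    carries the required fields, and numeric fields actually parse.
    """
    issues = []
    if len(parsed_rows) != len(raw_lines):
        issues.append(f"record count mismatch: {len(raw_lines)} in, {len(parsed_rows)} out")
    for n, row in enumerate(parsed_rows, 1):
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            issues.append(f"row {n}: missing {missing}")
        elif not row["qty"].isdigit():
            issues.append(f"row {n}: non-numeric qty {row['qty']!r}")
    return issues

# Hypothetical retail feed with one empty field and one bad numeric value.
feed = "sku,qty,ts\nA1,5,2021-01-02\nB7,,2021-01-02\nC3,two,2021-01-03\n"
lines = feed.strip().splitlines()[1:]            # raw data lines, header excluded
rows = list(csv.DictReader(io.StringIO(feed)))
problems = validate_ingest(lines, rows)
print(problems)
```

At Hadoop scale the same checks would run as distributed jobs over samples or aggregates rather than row by row in memory, but the validation logic is the same.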

Use of Qualitative Research to Generate a Function for Finding the Unit Cost of Software Test Cases

In this article, we demonstrate a novel use of case research to generate an empirical function through qualitative generalization. This innovative technique applies interpretive case analysis to the problem of defining and generalizing an empirical cost function for test cases through qualitative interaction with an industry cohort of subject matter experts involved in software testing at leading technology companies. While the technique is fully generalizable, this article demonstrates it with an example taken from the important field of software testing. The sheer volume of software developed today makes accounting for its cost imperative. While software testing is a critical aspect of the software development process, little attention has been paid to the cost of testing code, and specifically to the cost of test cases, compared to the cost of developing code. Our research fills this gap by providing a function for estimating the cost of test cases.

Framework for Reusable Test Case Generation in Software Systems Testing

Agile methodologies have become the preferred choice for modern software development. These methods focus on iterative and incremental development, where both requirements and solutions evolve through collaboration among cross-functional software development teams. The success of a software system depends on the quality of each stage of development, supported by proper testing practice. A software test ontology should represent the required software testing knowledge from the software tester's perspective. Reusing test cases is an effective way to improve the testing of software. By automating software testing, the test-case generation workload of a software tester can be reduced, previous software testing experience can be shared, and test efficiency can be increased. In this chapter, the authors introduce a software testing framework (STF) that uses rule-based reasoning (RBR), case-based reasoning (CBR), and ontology-based semantic similarity assessment to retrieve test cases from a case library. Finally, experimental results are used to illustrate some of the features of the framework.
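The CBR retrieval step can be sketched with a keyword-overlap similarity standing in for the framework's ontology-based semantic similarity. The case library and the Jaccard measure here are illustrative assumptions.

```python
def jaccard(a, b):
    """Similarity between two keyword sets (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical case library: past test cases indexed by requirement keywords.
CASE_LIBRARY = {
    "TC-101": {"login", "password", "lockout"},
    "TC-102": {"cart", "checkout", "payment"},
    "TC-103": {"login", "session", "timeout"},
}

def retrieve(query_keywords, k=2):
    """Return the k stored test cases most similar to a new requirement, for reuse."""
    ranked = sorted(CASE_LIBRARY,
                    key=lambda c: jaccard(CASE_LIBRARY[c], query_keywords),
                    reverse=True)
    return ranked[:k]

print(retrieve({"login", "timeout"}))  # most reusable cases first
```

In the described framework, rule-based reasoning and ontology distance would refine this ranking; keyword overlap is only the simplest possible similarity assessment.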


Artificial Intelligence in Software Testing: Impact, Problems, Challenges and Prospect

Abstract: Artificial Intelligence (AI) is making a significant impact in multiple areas, such as medicine, the military, industry, domestic life, law, and the arts, as AI can perform several roles, such as managing smart factories, driving autonomous vehicles, creating accurate weather forecasts, detecting cancer, and acting as a personal assistant. Software testing is the process of exercising software to uncover abnormal behaviour. It is a tedious, laborious, and time-consuming process. Automation tools have been developed that automate some activities of the testing process to enhance quality and timely delivery. Over time, with the inclusion of continuous integration and continuous delivery (CI/CD) pipelines, automation tools are becoming less effective. The testing community is turning to AI to fill the gap, as AI can check code for bugs and errors without any human intervention and much faster than humans. In this study, we aim to assess the impact of AI technologies on various software testing activities and facets of the software testing life cycle (STLC). Further, the study aims to identify and explain some of the biggest challenges software testers face when applying AI to testing. The paper also proposes some key future contributions of AI to the domain of software testing.


Object-Oriented Software Testing: A Review

  • Conference paper
  • First Online: 21 April 2022

  • Ali Raza
  • Babar Shah
  • Madnia Ashraf
  • Muhammad Ilyas

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 350)


Object-oriented (OO) software systems present specific challenges to testing teams. Because OO software is built from interacting components following the OO methodology, it is hard for testing teams to test the software with arbitrary software components, and the chance of errors increases. Researchers have identified different techniques, models, and methods to tackle these challenges. In this paper, we analyze and study OO software testing. To handle the challenges of OO software testing, different techniques and methods have been proposed, such as UML diagrams, evolutionary testing, genetic algorithms, black-box testing, and white-box testing. The research methodology used is a literature review (LR) of the recent decade.



Authors and affiliations: Ali Raza, Madnia Ashraf, and Muhammad Ilyas (Department of Computer Science and IT, University of Sargodha, Sargodha, Pakistan); Babar Shah (College of Technological Innovation, Zayed University, Abu Dhabi, UAE); Madnia Ashraf (also Punjab Information Technology Board, Lahore, Pakistan).

Editors and affiliations: Abrar Ullah (School of Mathematical and Computer Science, Heriot-Watt University, Dubai, United Arab Emirates); Sajid Anwar (Institute of Management Sciences, Center of Excellence in Information Technology, Peshawar, Pakistan); Álvaro Rocha (Lisbon School of Economics and Management, University of Lisbon, Portugal).

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Cite this paper

Raza, A., Shah, B., Ashraf, M., Ilyas, M. (2022). Object-Oriented Software Testing: A Review. In: Ullah, A., Anwar, S., Rocha, Á., Gill, S. (eds) Proceedings of International Conference on Information Technology and Applications. Lecture Notes in Networks and Systems, vol 350. Springer, Singapore. https://doi.org/10.1007/978-981-16-7618-5_40


Print ISBN : 978-981-16-7617-8

Online ISBN : 978-981-16-7618-5

eBook Packages: Intelligent Technologies and Robotics (R0)


  • Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model.
  • Llama 3 models will soon be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, and with support from hardware platforms offered by AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm.
  • We’re dedicated to developing Llama 3 in a responsible way, and we’re offering various resources to help others use it responsibly as well. This includes introducing new trust and safety tools with Llama Guard 2, Code Shield, and CyberSec Eval 2.
  • In the coming months, we expect to introduce new capabilities, longer context windows, additional model sizes, and enhanced performance, and we’ll share the Llama 3 research paper.
  • Meta AI, built with Llama 3 technology, is now one of the world’s leading AI assistants that can boost your intelligence and lighten your load—helping you learn, get things done, create content, and connect to make the most out of every moment. You can try Meta AI here .

Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases. This next generation of Llama demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. We believe these are the best open source models of their class, period. In support of our longstanding open approach, we’re putting Llama 3 in the hands of the community. We want to kickstart the next wave of innovation in AI across the stack—from applications to developer tools to evals to inference optimizations and more. We can’t wait to see what you build and look forward to your feedback.

Our goals for Llama 3

With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development. The text-based models we are releasing today are the first in the Llama 3 collection of models. Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core LLM capabilities such as reasoning and coding.

State-of-the-art performance

Our new 8B and 70B parameter Llama 3 models are a major leap over Llama 2 and establish a new state-of-the-art for LLM models at those scales. Thanks to improvements in pretraining and post-training, our pretrained and instruction-fine-tuned models are the best models existing today at the 8B and 70B parameter scale. Improvements in our post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses. We also saw greatly improved capabilities like reasoning, code generation, and instruction following, making Llama 3 more steerable.

[Chart: Llama 3 benchmark results]

*Please see evaluation details for setting and parameters with which these evaluations are calculated.

In the development of Llama 3, we looked at model performance on standard benchmarks and also sought to optimize for performance for real-world scenarios. To this end, we developed a new high-quality human evaluation set. This evaluation set contains 1,800 prompts that cover 12 key use cases: asking for advice, brainstorming, classification, closed question answering, coding, creative writing, extraction, inhabiting a character/persona, open question answering, reasoning, rewriting, and summarization. To prevent accidental overfitting of our models on this evaluation set, even our own modeling teams do not have access to it. The chart below shows aggregated results of our human evaluations across these categories and prompts against Claude Sonnet, Mistral Medium, and GPT-3.5.

[Chart: human evaluation preference rankings]

Preference rankings by human annotators based on this evaluation set highlight the strong performance of our 70B instruction-following model compared to competing models of comparable size in real-world scenarios.

Our pretrained model also establishes a new state-of-the-art for LLM models at those scales.

[Chart: pretrained model benchmark results]

To develop a great language model, we believe it’s important to innovate, scale, and optimize for simplicity. We adopted this design philosophy throughout the Llama 3 project with a focus on four key ingredients: the model architecture, the pretraining data, scaling up pretraining, and instruction fine-tuning.

Model architecture

In line with our design philosophy, we opted for a relatively standard decoder-only transformer architecture in Llama 3. Compared to Llama 2, we made several key improvements. Llama 3 uses a tokenizer with a vocabulary of 128K tokens that encodes language much more efficiently, which leads to substantially improved model performance. To improve the inference efficiency of Llama 3 models, we’ve adopted grouped query attention (GQA) across both the 8B and 70B sizes. We trained the models on sequences of 8,192 tokens, using a mask to ensure self-attention does not cross document boundaries.
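The document-boundary masking described above can be sketched directly. This is an illustrative reconstruction, not Meta's training code: a causal mask that additionally blocks attention between tokens from different source documents packed into one training sequence.

```python
def doc_boundary_mask(doc_ids):
    """Self-attention mask for packed sequences.

    mask[q][k] is True when query position q may attend to key position k:
    causal (k <= q) and within the same source document, so attention never
    crosses a document boundary even when documents are packed together.
    """
    n = len(doc_ids)
    return [[k <= q and doc_ids[q] == doc_ids[k] for k in range(n)]
            for q in range(n)]

# Two documents packed into one 6-token training sequence.
mask = doc_boundary_mask([0, 0, 0, 1, 1, 1])
for row in mask:
    print(["x" if m else "." for m in row])
```

In a real training stack this mask (or an equivalent block-diagonal attention bias) would be applied inside the attention kernel over a full 8,192-token sequence rather than materialized as a Python list.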

Training data

To train the best language model, the curation of a large, high-quality training dataset is paramount. In line with our design principles, we invested heavily in pretraining data. Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources. Our training dataset is seven times larger than that used for Llama 2, and it includes four times more code. To prepare for upcoming multilingual use cases, over 5% of the Llama 3 pretraining dataset consists of high-quality non-English data that covers over 30 languages. However, we do not expect the same level of performance in these languages as in English.

To ensure Llama 3 is trained on data of the highest quality, we developed a series of data-filtering pipelines. These pipelines include using heuristic filters, NSFW filters, semantic deduplication approaches, and text classifiers to predict data quality. We found that previous generations of Llama are surprisingly good at identifying high-quality data, hence we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3.
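As a rough sketch of how such a chained filtering pipeline fits together (the filter rules, thresholds, and `quality_score` field below are illustrative assumptions, not Meta's actual pipeline):

```python
def heuristic_filter(doc):
    # Illustrative rule: drop very short documents and documents that
    # are mostly non-alphabetic characters.
    text = doc["text"]
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    return len(text.split()) >= 5 and alpha_ratio > 0.5

def quality_filter(doc, threshold=0.5):
    # Stand-in for a model-based text-quality classifier score.
    return doc.get("quality_score", 0.0) >= threshold

def run_pipeline(docs, filters):
    # Apply each filter stage in turn, keeping only passing documents.
    for keep in filters:
        docs = [d for d in docs if keep(d)]
    return docs

docs = [
    {"text": "short", "quality_score": 0.9},
    {"text": "a long enough well formed sentence about testing", "quality_score": 0.8},
    {"text": "a long enough sentence that scored poorly on quality", "quality_score": 0.1},
]
kept = run_pipeline(docs, [heuristic_filter, quality_filter])  # one survivor
```

Real pipelines add stages such as NSFW filtering and semantic deduplication, but the chained keep/drop structure is the same.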

We also performed extensive experiments to evaluate the best ways of mixing data from different sources in our final pretraining dataset. These experiments enabled us to select a data mix that ensures that Llama 3 performs well across use cases including trivia questions, STEM, coding, historical knowledge, etc.

Scaling up pretraining

To effectively leverage our pretraining data in Llama 3 models, we put substantial effort into scaling up pretraining. Specifically, we have developed a series of detailed scaling laws for downstream benchmark evaluations. These scaling laws enable us to select an optimal data mix and to make informed decisions on how to best use our training compute. Importantly, scaling laws allow us to predict the performance of our largest models on key tasks (for example, code generation as evaluated on the HumanEval benchmark—see above) before we actually train the models. This helps us ensure strong performance of our final models across a variety of use cases and capabilities.

We made several new observations on scaling behavior during the development of Llama 3. For example, while the Chinchilla-optimal amount of training compute for an 8B parameter model corresponds to ~200B tokens, we found that model performance continues to improve even after the model is trained on two orders of magnitude more data. Both our 8B and 70B parameter models continued to improve log-linearly after we trained them on up to 15T tokens. Larger models can match the performance of these smaller models with less training compute, but smaller models are generally preferred because they are much more efficient during inference.
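The log-linear trend can be illustrated by fitting a line to loss versus log(tokens); the token counts and loss values below are invented for illustration, not Meta's measurements.

```python
import numpy as np

# Invented (tokens, loss) pairs shaped like the trend described above.
tokens = np.array([2e11, 1e12, 5e12, 1.5e13])
loss = np.array([2.30, 2.10, 1.92, 1.80])

# Fit loss ~ a * log10(tokens) + b; a negative slope means loss keeps
# falling as training tokens grow, with no plateau at the ~200B-token
# Chinchilla-optimal point.
a, b = np.polyfit(np.log10(tokens), loss, 1)
```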

To train our largest Llama 3 models, we combined three types of parallelization: data parallelization, model parallelization, and pipeline parallelization. Our most efficient implementation achieves a compute utilization of over 400 TFLOPS per GPU when trained on 16K GPUs simultaneously. We performed training runs on two custom-built 24K GPU clusters. To maximize GPU uptime, we developed an advanced new training stack that automates error detection, handling, and maintenance. We also greatly improved our hardware reliability and detection mechanisms for silent data corruption, and we developed new scalable storage systems that reduce overheads of checkpointing and rollback. Those improvements resulted in an overall effective training time of more than 95%. Combined, these improvements increased the efficiency of Llama 3 training by ~three times compared to Llama 2.
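One way to picture combining the three forms of parallelism is as a 3-D grid of GPU ranks. The decomposition below uses a common convention (tensor dimension fastest-varying) and is an assumption for illustration, not Meta's actual rank layout.

```python
def rank_to_coords(rank, dp, pp, tp):
    """Decompose a global GPU rank into (data, pipeline, tensor) grid
    coordinates, with the tensor-parallel dimension fastest-varying."""
    assert rank < dp * pp * tp
    t = rank % tp
    p = (rank // tp) % pp
    d = rank // (tp * pp)
    return d, p, t

# 16 GPUs arranged as data=2 x pipeline=2 x tensor=4:
coords = [rank_to_coords(r, dp=2, pp=2, tp=4) for r in range(16)]
# Every rank maps to a unique cell of the 2 x 2 x 4 grid.
```

Frameworks then place each rank's shard of the model (tensor slice, pipeline stage) and its slice of the data batch according to these coordinates.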

Instruction fine-tuning

To fully unlock the potential of our pretrained models in chat use cases, we innovated on our approach to instruction-tuning as well. Our approach to post-training is a combination of supervised fine-tuning (SFT), rejection sampling, proximal policy optimization (PPO), and direct preference optimization (DPO). The quality of the prompts that are used in SFT and the preference rankings that are used in PPO and DPO have an outsized influence on the performance of aligned models. Some of our biggest improvements in model quality came from carefully curating this data and performing multiple rounds of quality assurance on annotations provided by human annotators.
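For the DPO component, the per-preference-pair loss (Rafailov et al.) can be sketched directly; the log-probabilities in the example are invented for illustration.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l))),
    where w is the human-preferred response and l the rejected one."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy favors the chosen response more than the reference model
# does, the loss drops below log(2); with no preference shift it equals log(2).
loss = dpo_loss(-10.0, -14.0, ref_chosen=-12.0, ref_rejected=-12.0)
```

Minimizing this loss pushes the policy to assign relatively more probability to the preferred response than the frozen reference model does, without an explicit reward model.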

Learning from preference rankings via PPO and DPO also greatly improved the performance of Llama 3 on reasoning and coding tasks. We found that if you ask a model a reasoning question that it struggles to answer, the model will sometimes produce the right reasoning trace: The model knows how to produce the right answer, but it does not know how to select it. Training on preference rankings enables the model to learn how to select it.

Building with Llama 3

Our vision is to enable developers to customize Llama 3 to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. With this release, we're providing new trust and safety tools, including updated components with Llama Guard 2 and CyberSecEval 2, and the introduction of Code Shield, an inference-time guardrail for filtering insecure code produced by LLMs.

We've also co-developed Llama 3 with torchtune, the new PyTorch-native library for easily authoring, fine-tuning, and experimenting with LLMs. torchtune provides memory-efficient and hackable training recipes written entirely in PyTorch. The library is integrated with popular platforms such as Hugging Face, Weights & Biases, and EleutherAI, and even supports Executorch for enabling efficient inference on a wide variety of mobile and edge devices. For everything from prompt engineering to using Llama 3 with LangChain, we have a comprehensive getting started guide that takes you from downloading Llama 3 all the way to deployment at scale within your generative AI application.

A system-level approach to responsibility

We have designed Llama 3 models to be maximally helpful while ensuring an industry-leading approach to responsibly deploying them. To achieve this, we have adopted a new, system-level approach to the responsible development and deployment of Llama. We envision Llama models as part of a broader system that puts the developer in the driver's seat. Llama models will serve as a foundational piece of a system that developers design with their unique end goals in mind.

Instruction fine-tuning also plays a major role in ensuring the safety of our models. Our instruction-fine-tuned models have been red-teamed (tested) for safety through internal and external efforts. Our red teaming approach leverages human experts and automation methods to generate adversarial prompts that try to elicit problematic responses. For instance, we apply comprehensive testing to assess risks of misuse related to Chemical, Biological, Cyber Security, and other risk areas. All of these efforts are iterative and used to inform safety fine-tuning of the models being released. You can read more about our efforts in the model card.

Llama Guard models are meant to be a foundation for prompt and response safety and can easily be fine-tuned to create a new taxonomy depending on application needs. As a starting point, the new Llama Guard 2 uses the recently announced MLCommons taxonomy, in an effort to support the emergence of industry standards in this important area. Additionally, CyberSecEval 2 expands on its predecessor by adding measures of an LLM's propensity to allow for abuse of its code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection attacks (learn more in our technical paper). Finally, we're introducing Code Shield, which adds support for inference-time filtering of insecure code produced by LLMs. This mitigates risks around insecure code suggestions and code interpreter abuse, and supports secure command execution.

With the speed at which the generative AI space is moving, we believe an open approach is an important way to bring the ecosystem together and mitigate these potential harms. As part of that, we’re updating our Responsible Use Guide (RUG) that provides a comprehensive guide to responsible development with LLMs. As we outlined in the RUG, we recommend that all inputs and outputs be checked and filtered in accordance with content guidelines appropriate to the application. Additionally, many cloud service providers offer content moderation APIs and other tools for responsible deployment, and we encourage developers to also consider using these options.

Deploying Llama 3 at scale

Llama 3 will soon be available on all major platforms, including cloud providers, model API providers, and more. Llama 3 will be everywhere.

Our benchmarks show the tokenizer offers improved token efficiency, yielding up to 15% fewer tokens compared to Llama 2. Also, grouped query attention (GQA) has now been added to Llama 3 8B. As a result, we observed that despite the model having 1B more parameters than Llama 2 7B, the improved tokenizer efficiency and GQA keep inference efficiency on par with Llama 2 7B.
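The "up to 15% fewer tokens" figure is just a ratio of token counts for the same text under the two vocabularies; the counts below are invented for illustration.

```python
def fewer_tokens(old_count, new_count):
    # Fractional reduction in token count for the same text.
    return 1.0 - new_count / old_count

# Invented counts: a passage that tokenizes to 15,000 tokens under
# Llama 2's 32K vocabulary and 12,750 under the new 128K vocabulary.
savings = fewer_tokens(15_000, 12_750)  # 0.15 -> 15% fewer tokens
```

Fewer tokens for the same text means fewer forward passes per passage at inference time, which is how a bigger vocabulary can offset a larger parameter count.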

For examples of how to leverage all of these capabilities, check out Llama Recipes which contains all of our open source code that can be leveraged for everything from fine-tuning to deployment to model evaluation.

What’s next for Llama 3?

The Llama 3 8B and 70B models mark the beginning of what we plan to release for Llama 3. And there’s a lot more to come.

Our largest models are over 400B parameters and, while these models are still training, our team is excited about how they’re trending. Over the coming months, we’ll release multiple models with new capabilities including multimodality, the ability to converse in multiple languages, a much longer context window, and stronger overall capabilities. We will also publish a detailed research paper once we are done training Llama 3.

To give you a sneak preview of where these models are today as they continue training, we thought we could share some snapshots of how our largest model is trending. Please note that this data is based on an early checkpoint of Llama 3 that is still training, and these capabilities are not supported as part of the models released today.

We’re committed to the continued growth and development of an open AI ecosystem for releasing our models responsibly. We have long believed that openness leads to better, safer products, faster innovation, and a healthier overall market. This is good for Meta, and it is good for society. We’re taking a community-first approach with Llama 3, and starting today, these models are available on the leading cloud, hosting, and hardware platforms with many more to come.

Try Meta Llama 3 today

We’ve integrated our latest models into Meta AI, which we believe is the world’s leading AI assistant. It’s now built with Llama 3 technology and it’s available in more countries across our apps.

You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you. You can read more about the Meta AI experience here.

Visit the Llama 3 website to download the models and reference the Getting Started Guide for the latest list of all available platforms.

You’ll also soon be able to test multimodal Meta AI on our Ray-Ban Meta smart glasses.

As always, we look forward to seeing all the amazing products and experiences you will build with Meta Llama 3.

HP's Next Gen Antivirus Given Perfect Score in Independent Test

Researchers at AV-TEST name HP Wolf Pro Security's NGAV a 'Top Product' for Corporate Endpoint Protection

April 25, 2024

PALO ALTO, Calif., April 25, 2024 - A comprehensive new report from AV-TEST, a leading independent IT security research institute, has confirmed that HP Wolf Pro Security's Next Generation Antivirus (NGAV) is one of the best Windows Antivirus Software solutions for Business Users on the market. Wolf Pro Security's NGAV was recognized as a "top product" for corporate endpoint protection, receiving a perfect rating across all testing categories:

  • Protection: Assessing the ability of the antivirus to safeguard against malware and various cyber threats.
  • Performance: Measuring the speed and efficiency of the antivirus software across different systems.
  • Usability: Tracking false alarms and assessing the overall user experience, particularly concerning internet usage.

HP's top-rated NGAV is available as part of HP Wolf Pro Security, a comprehensive suite of solutions offering robust PC protection with minimal complexity, making it an efficient and cost-effective option for SMB and Midmarket customers. Additional features include:

  • Threat Containment: Leveraging isolation technology, this unique feature shields users from phishing attempts, zero-day exploits, ransomware, and unknown malware attacks.
  • Secure Browser: Preventing malicious links from compromising PC security, ensuring safe browsing experiences.
  • Credential Protection: Safeguarding login credentials from potential threats posed by malicious websites.

AV-TEST also analyzed HP Wolf Pro Security as part of its latest Advanced Threat Protection assessment. The institute awarded the product a perfect score within this testing category, recognizing it as an "advanced" product for Endpoint Protection. The test simulates 10 real-life attack scenarios where typical malware detection may fail but advanced protection capabilities in the product would stop the attack.

Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc., comments:

“We were delighted to see AV-TEST’s independent report, verifying that HP’s NGAV performs at the highest level. Small and mid-sized organizations are targeted by increasingly sophisticated threats, especially as attacks are being supercharged by AI, which can have a devastating impact on firms operating with limited resources.

“By offering NGAV as part of the HP Wolf Pro Security suite, our layered approach helps organizations mitigate security risks and streamline IT operations, while providing a user-friendly experience. This means they can guard against a wider range of cyber threats.”

HP Inc. is a global technology leader and creator of solutions that enable people to bring their ideas to life and connect to the things that matter most. Operating in more than 170 countries, HP delivers a wide range of innovative and sustainable devices, services and subscriptions for personal computing, printing, 3D printing, hybrid work, gaming, and more. For more information, please visit http://www.hp.com.

About the AV-TEST Institute

The AV-TEST GmbH is the independent research institute for IT security from Germany. For more than 15 years, the security experts from Magdeburg have guaranteed quality-assuring comparison and individual tests of virtually all internationally relevant IT security products. In this, the institute operates with absolute transparency and regularly makes its latest tests and current research findings available to the public free of charge on its website. By doing so, AV-TEST helps manufacturers towards product optimization, supports members of the media in publications and provides advice to users in product selection. Moreover, the institute assists industry associations, companies and government institutions on issues of IT security and develops security concepts for them.

Over 30 select security specialists, one of the largest collections of digital malware samples in the world, its own research department, as well as intensive collaboration with other scientific institutions guarantee tests on an internationally recognized level and at the current state of the art. AV-TEST utilizes analysis systems developed in-house for its tests, thus guaranteeing test results uninfluenced by third parties and reproducible at all times for all standard operating systems and platforms.

Thanks to many years of expertise, intensive research and laboratory conditions kept up-to-date, AV-TEST guarantees the highest quality standards of tested and certified IT security products. In addition to traditional virus research, AV-TEST is also active in the fields of security of IoT and eHealth products, applications for mobile devices, as well as in the field of data security of applications and services.

©Copyright 2023 HP Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Smithsonian

New Collections: Paul J. Smith Papers

Detail of grayscale image of many people gathered in a room, surrounding empty boxes stacked haphazardly.

This entry is part of an ongoing series highlighting new collections. The Archives of American Art collects primary source materials—original letters, writings, preliminary sketches, scrapbooks, photographs, financial records, and the like—that have significant research value for the study of art in the United States. The following essay was originally published in the Spring 2024 issue (vol. 63, no. 1) of the Archives of American Art Journal. More information about the journal can be found at  https://www.journals.uchicago.edu/toc/aaa/current .

Measuring a massive 75 linear feet, the papers of museum curator and director Paul J. Smith (1931–2020) document his long and illustrious career through biographical material, extensive personal and professional correspondence, audio recordings and interview transcripts, photographic material (including photographs of artists taken by Smith), and voluminous research files relating to exhibitions, publications, and other projects. 

In addition to such landmark exhibitions as Objects: USA (1969) and Craft Today: Poetry of the Physical (1986), Smith's papers document nearly all his curatorial endeavors, from his earliest years creating innovative window displays for Buffalo's Flint & Kent department store, through his three-decade tenure at the Museum of Contemporary Crafts (later known as the American Craft Museum and then the Museum of Arts and Design) in New York, to his subsequent independent projects. Particularly from the mid-1960s through the mid-1970s, Smith's exhibition files reveal a visionary approach that extended well beyond the traditional bounds of "craft," engaging with experimental forms of artistic and material culture that were emerging at the time. In conjunction with the 1966 exhibition The Object in the Open Air, for example, Phyllis Yampolsky and Dean Fleming staged a happening-like event involving hundreds of individuals collectively painting 105 yards of canvas stretched between trees and poles in New York's Central Park.

Grayscale image of a man in a paint-covered jumpsuit making holes in large scrims of paper in front of an audience.

The following year, the exhibition Made from Paper opened with James Lee Byars's performative sculpture The Giant Soluble Man, an ambitious project in which an enormous expanse of water-soluble paper covered the entirety of 53rd Street between Fifth and Sixth Avenues. On Byars's signal, two Department of Sanitation street-cleaning trucks drove over the sculpture, spraying it with water and causing it to dissolve. In a later writing included in the papers, Smith recalled, "We took great risks with Byars'[s] events, in allowing the artist to realize what might have been termed an impossible idea." Photographs in Smith's papers reveal an additional, and until now all but forgotten, performance the night before, entitled "Flux Masters of the Rear Guard," by a group of young artists associated with the movement Fluxus.

Postcard with a foot collaged with torn pieces of paper, and writing in black ink. There is a postmark of September 19, 1977 from Quakertown, New York, and a cancelled stamp with an image of the Capitol building that says RIGHT OF THE PEOPLE TO PEACEABLY ASSEMBLE.

Smith maintained friendships and close associations with many artists, and his papers include hundreds of letters and postcards—many of them illustrated—from such noted figures as Byars, ceramicists Robert Arneson and Clayton Bailey, metalsmith and jeweler Robert Ebendorf, sculptor and fiber artist Sheila Hicks, textile designer and author Jack Lenor Larsen, and weaver Alice Kagawa Parrott. Of note are more than a dozen pieces of correspondence from glass artist Dale Chihuly (including one 1976 letter written on the back of a menu from the Gloucestershire Royal Hospital, where Chihuly was recovering from an automobile accident), and mail art in the form of small collages from pioneering fiber artist Lenore Tawney.

In 1975, Smith began recording interviews and documenting artists involved with studio craft. Among the archived tapes are largely untranscribed interviews with Parrott, Florence Eastmead, Toshiko Takaezu, Peter Voulkos, and Margret Craver Withers. Smith also made thousands of photographs of artists in their homes and studios and at events. These recordings and photographs show how Smith touched virtually every corner of the studio craft movement. Through their impressive depth and innumerable connections to other collections, his papers promise to continue that connective practice into the future.

Jacob Proctor is the Gilbert and Ann Kinney New York Collector at the Archives of American Art. 

COMMENTS

  1. (PDF) Artificial Intelligence in Software Testing: Impact, Problems

    Artificial Intelligence is gradually changing the landscape of software engineering in general [5] and software testing in particular [6], both in research and industry as well. In the last two decades, AI has been found to have made a considerable impact on the way we are approaching software testing.

  2. software testing Latest Research Papers

    In this chapter, the authors introduce a software testing framework (STF) that uses rule-based reasoning (RBR), case-based reasoning (CBR), and ontology-based semantic similarity assessment to retrieve the test cases from the case library. Finally, experimental results are used to illustrate some of the features of the framework.

  3. Mapping the structure and evolution of software testing research over

    Research in software testing is growing and rapidly-evolving. Based on the keywords assigned to publications, we seek to identify predominant research topics and understand how they are connected and have evolved. We have applied co-word analysis to characterize the topology of software testing research over four decades of research publications.

  4. Artificial Intelligence in Software Testing: A Systematic Review

    Software testing is a crucial component of software development. With the increasing complexity of software systems, traditional manual testing methods are becoming less feasible. Artificial Intelligence (AI) has emerged as a promising approach to software testing in recent years. This review paper aims to provide an in-depth understanding of the current state of software testing using AI. The ...

  5. Machine Learning Applied to Software Testing: A Systematic Mapping

    Also, ML has been used to evaluate test oracle construction and to predict the cost of testing-related activities. The results of this paper outline the ML algorithms that are most commonly used to automate software-testing activities, helping researchers to understand the current state of research concerning ML applied to software testing.

  6. Software Testing Research Challenges: An Industrial Perspective

    There have been rapid recent developments in automated software test design, repair and program improvement. Advances in artificial intelligence also have great potential impact to tackle software testing research problems. In this paper we highlight open research problems and challenges from an industrial perspective. This perspective draws on our experience at Meta Platforms, which has been ...

  7. Reviewing Software Testing Models and Optimization Techniques: An

    This paper provides a comprehensive review of various software testing models and optimization techniques available in the literature, emphasizing their performance analysis and related research ...

  8. 1 Software Testing with Large Language Models: Survey, Landscape, and

    Software Testing with Large Language Models: Survey, Landscape, and Vision ... from both the software testing and LLMs perspectives. The paper presents a detailed discussion of the software testing tasks for ... Since it is a new emerging field, there are many research opportunities, including exploring LLMs in an early stage of

  9. [2201.05371] Artificial Intelligence in Software Testing : Impact

    Artificial Intelligence (AI) is making a significant impact in multiple areas like medical, military, industrial, domestic, law, arts as AI is capable to perform several roles such as managing smart factories, driving autonomous vehicles, creating accurate weather forecasts, detecting cancer and personal assistants, etc. Software testing is the process of putting the software to test for some ...

  10. A Decade of Intelligent Software Testing Research: A Bibliometric Analysis

    It gets harder and harder to guarantee the quality of software systems due to their increasing complexity and fast development. Because it helps spot errors and gaps during the first phases of software development, software testing is one of the most crucial stages of software engineering. Software testing used to be done manually, which is a time-consuming, imprecise procedure that comes with ...

  11. Software Testing Techniques: A Literature Review

    Keywords— Testing Methodologies, Software Testing Life Cycle, Testing Frameworks, Automation Testing, Test Driven Development, Test Optimization, Quality Metrics

  12. A Comprehensive Bibliometric Assessment on Software Testing ...

    The research study provides a comprehensive bibliometric assessment in the field of Software Testing (ST). The dynamic evolution in the field of ST is evident from the publication rate over the last six years. The research study is carried out to provide insight into the field of ST from various research bibliometric aspects. Our methodological approach includes dividing the six-year time ...

  13. Software Testing: Issues and Challenges of Artificial ...

    In recent years, there has been an increase in popularity for applications that ... numerous software testing challenges occur. This paper aims to recognize and to explain some of the biggest challenges that software testers face in dealing with AI/ML applications. For future research, this study has key implications. ...

  14. (PDF) Software Testing Techniques New Trends

    The three basic steps in software testing are unit testing, integration testing, and system testing. Each of these steps is either tested by the software developer or the quality assurance ...

  15. Object-Oriented Software Testing: A Review

    The rest of this research paper consists of object-oriented software testing introduction, challenges and issue of testing, technique, method, cost challenges, and conclusion. ... Tonella proposed a method for classes' evolutionary testing (ET) in the latest research. It is used for optimal test parameter search that having the combinations ...

  16. A survey on software test automation return on investment, in

    Executives in the commercial IT industry strive to automate all test cases to save time, get repeatability, identify defects early, increase quality, and cut cost. The nature of software testing activity is a support function, and hence, it is treated as a cost center. Therefore, the inclination to automate all test suites to minimize the cost is widely common. 100% automation is rarely ...

  17. Software-testing education: A systematic literature mapping

    3.1. Goal and research questions. The goal of this study is to classify and summarize reported experience and evidence, as well as research topics and research questions, in the area of software testing education. By doing so, we seek to provide a holistic view to the body of knowledge on software testing education.

  18. Evolution of Software Testing Strategies and Trends: Semantic Content

    Abstract: From the early days of computer systems to the present, software testing has been considered a crucial process that directly affects the quality and reliability of software-oriented products and services. Accordingly, there is a huge amount of literature on improving software testing approaches. However, there are limited reviews that show the whole picture of the ...

  19. Artificial Intelligence Applied to Software Testing: A Literature Review

    Abstract — In the last few years, Artificial Intelligence (AI) algorithms and Machine Learning (ML) approaches have been successfully applied in real-world scenarios like commerce, industry and ...

  20. Latest Research and Development on Software Testing Techniques and

    In this paper, testing techniques and tools are described and some typical recent research is summarized. Software testing is a process of finding errors while executing a program so that we get zero-defect software. It is aimed at evaluating the capability or usability of a program. Software testing is an important means of assessing the quality of software.

  24. Software Testing Techniques: A Literature Review

    The growing complexity of today's software applications, in conjunction with increasing competitive pressure, has pushed the quality assurance of developed software to new heights. Software testing is an inevitable part of the Software Development Lifecycle, and keeping in line with its criticality in the pre- and post-development process makes it something that should be catered for with ...

  28. Research on software testing techniques and software automation testing

    Abstract: Software testing is a process that involves executing a software program/application and finding all errors or bugs in it so that the result is defect-free software. The quality of any software can only be known through testing. With the advancement of technology around the world, the number of verification ...
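The definition above, executing a program to find errors, can be illustrated with Python's built-in doctest module, which runs the examples embedded in a docstring and reports any mismatch as a failure; the `divide` function is a hypothetical example, not from the paper:

```python
import doctest

def divide(a, b):
    """Return a / b.

    >>> divide(10, 2)
    5.0
    >>> divide(1, 0)
    Traceback (most recent call last):
        ...
    ZeroDivisionError: division by zero
    """
    return a / b

if __name__ == "__main__":
    # Executes every docstring example; a non-zero failure count
    # signals a defect in the implementation.
    print(doctest.testmod().failed)  # 0 when all embedded tests pass
```

Frameworks for automation testing build on the same principle: encode the expected behavior once, then re-execute the checks on every change to catch regressions.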