Elements of Personnel Management

The following are the elements of personnel management:

  1. Organisation - An organisation can be described as the framework within which the many activities of a concern take place in view of its objectives; it is the physical framework of various interrelated activities. From manpower planning to employee maintenance, all activities take place within this framework. The nature of an organisation depends on its objective: the objective of a business concern is profit; that of clubs, hospitals, schools, etc. is service; and that of a consultancy is sound advice. It is therefore on the organisational structure that the achievement of a concern's objectives depends. In personnel management, a manager has to understand the importance of the organisational structure.
  2. Job - The second element, the job, tells us the activities to be performed in the organisation. It is said that the objectives of a concern can be achieved only through its functional departments. Given the present size of organisations, the nature of activities is changing: in addition to the three primary departments, the personnel and research departments are new additions. The various types of jobs available are:
     1. Physical jobs
     2. Creative jobs
     3. Proficiency jobs
     4. Intellectual jobs
     5. Consultancy jobs
     6. Technical jobs
  3. People - The last and most important element of personnel management is people. In an organisational structure whose fundamental purpose is the achievement of objectives, the presence of a workforce is vital. Therefore, to achieve departmental objectives, different kinds of people with different skills are appointed. People form the most important element because:
     1. The organisational structure is meaningless without them.
     2. They help to achieve the objectives of the enterprise.
     3. They help in maintaining the functional areas.
     4. They help in achieving the objectives of the functional departments.
     5. They make the concern operational.
     6. They give life to the physical organisation.

     The different types of people generally required in a concern are:

     1. Physically fit people
     2. Creative people
     3. Intellectuals
     4. Technical people
     5. Proficient and skilled people

In personnel management, a personnel manager has to understand the relationship between these three elements and their importance in the organisation. He has to understand essentially three relationships:

  1. Relationship between the organisation and the job
  2. Relationship between the job and the people
  3. Relationship between the people and the organisation

The relationship between the organisation and the job helps make the job effective and significant. The relationship between the job and the people makes the job itself important. The relationship between the people and the organisation gives due importance to the organisational structure and to the role of the people in it.



Asphalt and Hydraulic Concrete Mix Design

This document presents a study of the performance of gneiss aggregates from the Archean Domain of Man, with the addition of filler, as a replacement for the basalt of the Kasila Group in the asphalt and hydraulic concrete mix designs of southern Sierra Leone. The goal is to compare the results of the asphalt and hydraulic concrete mix designs obtained with gneiss and with basalt aggregates. The methods applied are 1) the volumetric design and Marshall method for the asphalt, and 2) the French Dreux-Gorisse method for the concrete. We added 2% of gneissic filler and 2% of Portland cement type 42.5 R to the asphalt hot mix with gneiss aggregates to follow the variation of the criteria. The Marshall, diametric compression and Duriez tests required us to perform four different mix designs. All four meet the requirements, but F2 and F4 give the best mechanical properties. F2 (gneiss + 2% filler) and F4 (basalt) have many similarities, from which we can conclude that they are interchangeable. F2 gives an optimal bitumen content of 5.25%. With regard to hydraulic concrete, the results of the compressive strength test (cement content 350 kg CMI 42.5 R/m3) with the gneiss and basalt aggregates are respectively 40 MPa and 45 MPa after 28 days of curing; these values are greater than the 35 MPa required by the technical specifications. The use of the Super Fluid® Thermoplast 120 admixture to increase the concrete compressive strength is justified by the requirement of a minimum of 80% of Rc28 at 24 hours. For both types of concrete, we obtain 34 and 35 MPa at 24 hours, which are higher than the minimum of 32 MPa. These results meet the requirements of the technical specifications.
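The acceptance arithmetic behind these conclusions can be written as a short check. A minimal sketch (Python is used here only for illustration; the 32 MPa early-strength floor is taken as 80% of the 40 MPa design target, as stated in the abstract):

```python
# Specification check for the compressive-strength values quoted above.
REQUIRED_RC28 = 35.0   # MPa at 28 days, per the technical specifications
REQUIRED_RC24 = 32.0   # MPa at 24 h: 80% of the 40 MPa design target

def meets_spec(rc24: float, rc28: float) -> bool:
    """True if both the 28-day strength and the 24-h early-strength rule hold."""
    return rc28 >= REQUIRED_RC28 and rc24 >= REQUIRED_RC24

# (24 h, 28 d) strengths in MPa, as reported in the abstract
results = {"gneiss": (34.0, 40.0), "basalt": (35.0, 45.0)}
verdicts = {agg: meets_spec(rc24, rc28) for agg, (rc24, rc28) in results.items()}
```

Both mixes pass both thresholds, which matches the conclusion of the study.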

A good road network with good infrastructure is essential to create a suitable environment for economic development. In West Africa, some economically strategic areas are still isolated due to poor road conditions.

As part of the Mano River cooperation between Liberia, Sierra Leone and Guinea, it is planned to link Monrovia (Liberia) and Conakry (Guinea) via Bo (southern Sierra Leone).

In order to connect Liberia and southern Sierra Leone, the European Development Fund has financed the Bandajuma-Mano River section, which is 103 km long.

However, the Bandajuma-Mano river project crosses the gneiss of the Archean Domain of Man [1].

It is in this context that research is being conducted on gneiss as a substitute for the long-used basalt.

To meet the objectives of this study, the following will be carried out:

a) A geological overview will present the local geology of southern Sierra Leone.

b) Asphalt mix design: in addition to the Marshall tests, water sensitivity will be evaluated by the Duriez test. Using a mathematical approach, elastic modulus values will be calculated to assess the behavior of the asphalt mix design with the compaction level.

c) Concrete mix design with gneiss aggregates will allow the determination of their compatibility with Portland cement and of their performance compared with basalt aggregates.


Sieving Error from Dry Separating Silt Soils

The dry-separation method is an alternative to wet-preparation in the current European Standard for the determination of particle size distributions by the sieving of soils. Due to the risk of error, dry-separation is cautioned against in the standard; however, there is no additional guidance as to when it is unsuitable nor as to the magnitude of error that it may introduce. This study investigates the dry-separation method by comparing it with the conventional wet-preparation method in terms of the particle-size distributions of eight cohesionless sand-gravel soils with varying amounts of nonplastic fines. The findings indicate a gradually increasing sieving error for the fractions below 0.5 mm with the amount of fines in the soil; depending on the fines content, dry-separation introduced errors upwards of 45% in silt-sand-gravel soils. An empirical best-fit formula is proposed for estimating the error of the dry-preparation method on this type of soil. Furthermore, to avoid sieving errors, the results suggest that the dry-separation method should not be used for silt-sand-gravel soils exceeding 2% silt-size fractions.

The process of obtaining the particle size distribution (i.e., the gradation) of a soil incorporates several sequential steps: an initial weighing, an initial oven-drying, a second weighing, washing (removal of the fines, i.e., particles finer than 0.063 mm according to the current European Standard [1] or 0.075 mm according to the American equivalent [2]), a second round of oven-drying, a third weighing, and finally the sieving of the remaining fractions of the soil. The sieving is usually performed by shaking the soil through a stack of sieves with openings of different sizes; the mass retained in each size range can thereafter be determined by weighing. The final product, the particle-size distribution curve, is used in geotechnics for many purposes, e.g., analysis, design, prospecting, and the determination of engineering properties [2], to name a few. The oven-drying stage is the most time-consuming step; at 110˚C ± 5˚C it typically requires 24 hours to complete.
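The weighing and sieving steps above reduce to a simple cumulative mass computation. A minimal sketch with made-up masses (not data from this study), where the fines removed in the wash are counted as passing the finest (0.063 mm) sieve:

```python
# Percent passing from sieve masses (illustrative numbers only).
sieves_mm  = [4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.063]            # sieve openings
retained_g = [120.0, 180.0, 210.0, 160.0, 130.0, 90.0, 60.0]     # dry mass on each
washed_fines_g = 50.0   # mass removed during the wet-preparation wash

total_g = sum(retained_g) + washed_fines_g
percent_passing = []
cumulative = 0.0
for mass in retained_g:
    cumulative += mass
    percent_passing.append(100.0 * (total_g - cumulative) / total_g)
# The last entry is the fines content (particles finer than 0.063 mm).
```

With the numbers above, 88% of the sample passes the 4 mm sieve and the fines content is 5%.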

When the process includes removing the fines by washing, it is called wet-preparation [1]. Dry-separation, on the other hand, is an alternative method in the European Standard (but not in [2]) that allows one to bypass the washing stage and continue straight to the sieving stage. In the following discussion, these methods will be abbreviated as “wet-prep” and “dry-sep”, respectively. Naturally, the dry-sep method saves processing time; however, ref. [1] cautions against it by stating that “Wet preparation is preferred for soils with particles smaller than 0.063 mm, as use of dry-separation method may introduce significant errors”. No further guidance is given, though, as to when the dry-sep method is unsuitable nor as to the magnitude of the error that it may introduce if used inappropriately. Since it is less time-consuming, the dry-sep approach is advantageous when there are time or economic constraints and in special cases, such as when the original soil must be preserved. However, sieving errors may also arise for other reasons, e.g., sieve overloading (or underloading), errors due to particle properties and shape [3], or the formation of fine-particle aggregates that lump together [4].

In this paper, particle-size distributions from the dry-sep method are compared to those of the conventional wet-prep method. Eight nonplastic silt-sand-gravel soils with varying amounts of fines are studied. It will be shown that the sieving error caused by using the dry-sep method increases with the amount of fines, generally resulting in errors in the minus 0.5-mm range, which may produce a notable underestimation of the finer fractions of the soil (e.g., the fines content).


Seismic Mitigation Designs for Reinforced Concrete Buildings

An earthquake can be regarded as a natural phenomenon or as a disaster; the seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. It is necessary to predict the performance both of existing structures and of structures at the design stage when they are subjected to earthquake loads. It is also necessary to predict the repair cost required for the rehabilitation of existing buildings whose seismic resistance is insufficient, as well as the construction cost and expected repair cost of structures at the design stage that are designed to have a ductile behavior with acceptable cracks. This study aims to propose a method of seismic performance evaluation for existing and new structures based on the width of the cracks resulting from seismic exposure. It also assesses the effect of building performance during earthquakes on the life cycle cost. FEMA 356 criteria were used to predict the building responses due to the seismic hazard. As a case study, a seven-story reinforced concrete building was designed by four design approaches and then analyzed by static nonlinear pushover analysis, using SAP2000 software, to predict its response and performance during earthquake events. The first approach is to design the building to resist gravity loads only, using the ECP code. The second is to design the building to resist gravity and seismic loads by static linear analysis according to the ECP code. The third is to design it to resist gravity and seismic loads by static linear analysis according to the regulations of the Egyptian Society of Earthquake Engineering (ESEE). Finally, the fourth is to design the building as in the second approach but with a ground acceleration five times greater, i.e., using a ductility factor R = 1.
The methodology followed in this study provides initial guidelines and the steps required to assess the seismic performance, and the associated cost, of a variety of design methods for earthquake-resisting reinforced concrete structures, and to select the retrofitting strategies indicated to repair the structure after an earthquake.

Recent earthquake events in various areas of the world, and the resulting harm, especially human fatalities, have shown that many structures cannot withstand earthquake loads. The large damage caused by the earthquake that struck Cairo in 1992 showed that, at the time of construction, the structures had been designed to sustain only vertical loads and had ineffective horizontal load resistance. This reflects the low ductility, shear resistance, and steel confinement found in the plastic hinge zones of columns and beam-column connections. It is therefore urgent to assess the seismic performance of existing structures and to constantly update the seismic codes used for the design of new structures.

The requirement to design structures for seismic load resistance was enforced in the Egyptian design codes, which motivated the Ministry of Housing and Buildings to regularly update the Egyptian code provisions to account for the effect of earthquake loads. After October 1992, a set of Egyptian codes was released to avoid building failure and to control significant damage in structural elements. Earthquake analysis involves many considerations that have been formed through the performance assessment of existing structures subjected to severe earthquakes. A well-engineered structure must satisfy the seismic performance requirements, which demand careful attention in analysis, design, and reinforcement detailing, together with good construction. The successful integration of analysis, design, and construction achieves the safety of the structure.

Krawinkler et al. [1] used the pushover analysis method to assess building performance: to obtain the inter-story drifts while taking into account the changes in stiffness and strength, to evaluate the P-∆ effect, to determine the effect of the strength deterioration of elements on the behavior of the whole structure, to obtain the sequence of failure of structural members, and to identify the weak points in the structural members.
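Inter-story drifts obtained this way are often screened against global performance levels. A rough sketch, using the indicative transient-drift values that FEMA 356's commentary gives for concrete frames (roughly 1%, 2% and 4%, quoted here from memory; the formal acceptance criteria are checked member by member, so this is only a screening aid):

```python
# Screening a peak inter-story drift ratio against FEMA 356-style
# performance levels for concrete frames (indicative values only).
def performance_level(drift_ratio: float) -> str:
    if drift_ratio <= 0.01:
        return "Immediate Occupancy"
    if drift_ratio <= 0.02:
        return "Life Safety"
    if drift_ratio <= 0.04:
        return "Collapse Prevention"
    return "Collapse"
```

A pushover run that reports a 1.5% peak drift would thus be flagged as Life Safety and examined at the member level from there.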

Maske [2] used nonlinear static pushover analysis, which is considered a common method for assessing the seismic performance of new and existing structures, to identify the weak zones in a building and then decide whether it can be retrofitted or rehabilitated according to its level of damage. He performed pushover analysis on multistoried frame structures using SAP2000 software, analyzing two framed structures with 5 and 12 floors, respectively. The results of his study show that the behavior of a properly detailed reinforced concrete frame building is adequate, as indicated by the intersection of the capacity curve with the demand curve and by the distribution of plastic hinges in the structural members.

To perform performance-based design, one must develop an evaluation method for the seismic-resistance performance of reinforced concrete structural members. The performance limit states are classified into three: the serviceability limit state, the safety limit state, and the damage-control limit state. Each state is defined by the damage to the structural members; the yielding of the reinforcing steel bars and the crack width are used as damage indices. As a result of the plastic nonlinear frame analysis based on the performance-based design method, the crack width of each member is calculated at each step [3].

Igarashi [4] developed an approach for the assessment of seismic damage in reinforced concrete members, which is important for the exact selection of the most suitable repair technique for structures damaged by earthquakes. He presents the concepts and outline of the damage assessment steps for ductile reinforced concrete structural members. The suggested analytical models assess the crack width, the crack length, and the area of spalled concrete in ductile columns and beams. These models are intended to be applied to the pushover analysis of framed structures in practical seismic design.


An ice cream production scheduling problem

This paper addresses an ice cream production scheduling problem, a hybrid flow shop scheduling problem that comes from an ice cream manufacturing company. The production system can be modelled as a three-stage no-wait hybrid flow shop with batch-dependent setup costs. To contribute to reducing the gap between theory and practice, we have considered the real constraints and the criteria used by the planners. The problem has been formulated as a mixed integer program. Further, two competitive heuristic procedures have been developed, and one of them will be proposed for scheduling in the ice cream factory.

The first research papers on hybrid flow shops appeared in the 1970s; Salvador (1973) was one of the pioneering papers published on hybrid flow shops with more than two stages, motivated by the need for a scheduling procedure in a nylon polymerization factory. Although some authors were concerned with the study of such systems from then on, it was at the end of the 1980s that hybrid flow shop systems began to attract real interest from researchers. This interest was caused by the increasing use of this configuration in industry due to its flexibility. Even so, most of the published papers consider the scheduling problem in this environment from a theoretical point of view, and very few deal with real cases. According to the state of the art by Vignier, Billaut, and Proust (1999), only Narastmhan and Panwalkar (1984), Proust and Grunenberguer (1995), Paul (1979) and Sherali, Sarin and Kodialam (1990) deal with industrial applications. Subsequent to the publication of this state of the art, Wong, Chan and Ip (2001) proposed a genetic algorithm to schedule spreading, cutting and sewing operations in apparel manufacturing. Göthe-Lundgren, Lundgren and Persson (2002) solved the scheduling problem in an oil refinery company using mixed integer programming. Jin, Ohno, Ito and Elmaghraby (2002) developed a genetic algorithm to schedule orders in a printed circuit board assembly line. Lin and Liao (2003) proposed a heuristic procedure to schedule one day's orders in a label stickers manufacturing company to minimize the weighted maximal tardiness. Bertel and Billaut (2004) treated a check-processing system as a three-stage hybrid flow shop with recirculation and proposed a heuristic procedure to minimize the weighted number of tardy jobs. Lee, Kim and Choi (2004) analyzed the production scheduling problem in a leadframe manufacturing plant. The authors proposed a bottleneck-focused heuristic procedure to minimize the total tardiness of a given set of jobs. Ruiz and Maroto (2006) studied the scheduling problem in ceramic tile manufacturing and developed a genetic algorithm that performs very competitively. Ruiz, Serifoglu and Urlings (2008), trying to get closer to the real flow shop scheduling environment, investigated the effect of including realistic considerations, characteristics and constraints on problem difficulty.

Aware that an important gap between theory and practice still exists, we visited different types of factories in our surroundings to identify which production systems can be formulated as hybrid flow shops and to detect not only the most important constraints that affect the scheduling problem but also the criteria used by the planners. We were able to verify that manufacturing systems that are very different from each other can be formulated as hybrid flow shops in order to develop efficient scheduling procedures. Among them, we include the manufacturing systems of a labels factory, an acrylic sheets factory, a cocoa powder factory, an active pharmaceutical ingredients (API) factory, a cold cuts factory and an ice cream factory. Some special constraints have been detected in each manufacturing system (Ribas, 2007), but also some constraints that are common to all of them, in particular the effect of setup times. In this paper we have considered the characteristics found in the ice cream factory.
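The no-wait constraint is what distinguishes this shop from an ordinary flow shop: a batch, once started, must move through the stages without waiting. A simplified sketch of how a sequence is evaluated, assuming one machine per stage and ignoring the batch-dependent setups of the real system:

```python
# Makespan of a job sequence in a no-wait flow shop with one machine per
# stage and setups ignored -- a simplified sketch of the three-stage no-wait
# model described above (the real system also has parallel machines per stage
# and batch-dependent setup costs).

def no_wait_makespan(sequence, proc):
    """proc[j][k] = processing time of job j at stage k.

    In a no-wait shop each job's start is delayed until every stage machine
    will be free exactly when the job reaches it."""
    n_stages = len(proc[sequence[0]])
    free = [0.0] * n_stages        # time at which each stage machine is free
    finish = 0.0
    for j in sequence:
        offsets, prefix, start = [], 0.0, 0.0
        for k in range(n_stages):
            offsets.append(prefix)
            start = max(start, free[k] - prefix)   # push start right if busy
            prefix += proc[j][k]
        for k in range(n_stages):
            free[k] = start + offsets[k] + proc[j][k]
        finish = start + prefix
    return finish

# Two illustrative jobs with hypothetical stage times (e.g. mix, harden, pack):
example = {0: [2.0, 3.0, 1.0], 1: [1.0, 2.0, 2.0]}
```

Sequencing decisions change the makespan even without setups: here the order [1, 0] finishes two time units earlier than [0, 1].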

The rest of the paper is organized as follows: Section 2 analyzes the ice cream production system; Section 3 develops a mathematical model using mixed integer programming (MIP); Section 4 proposes a heuristic procedure; Section 5 shows the results obtained in the computational experiments; and Section 6 concludes.


Simulation-enhanced lean design process

A traditional lean transformation process does not validate the future state before implementation, relying instead on a series of iterations to modify the system until performance is satisfactory. An enhanced lean process that includes future-state validation before implementation is presented, with simulation modeling and experimentation proposed as the primary validation tool. Simulation modeling and experimentation extends value stream mapping to include time, the behavior of individual entities, structural variability, random variability, and component-interaction effects. Experiments can be conducted to analyze the model and draw conclusions about whether the lean transformation effectively addresses the current-state gap. Industrial applications of the enhanced lean process show its effectiveness.

Lean concepts for system transformation have become ubiquitous (Learnsigma 2007). However, lean concepts do not address one significant issue: providing evidence that a system transformation will meet measurable performance objectives before implementation. This lack of validation increases the risk that the transformed system will not meet the performance objectives. The various existing lean processes address this deficiency by emphasizing their iterative nature: simply repeating all or part of the process, including implementation, until the objectives are achieved. This approach is inherently opposed to lean concepts, as it unnecessarily extends the time, and thus increases the cost, of completing the transformation to a lean system.

Ferrin, Muller, and Muthler (2005) provide a perspective for addressing this lean deficiency: Simulation is uniquely able to support achieving a corporate goal of finding a correct, or at least a very good, solution that meets system design and operation requirements before implementation. Thus, these authors conclude that simulation provides a more powerful tool (a 6σ capable tool) than those commonly used in a lean process.

The objective of this paper is to develop an enhanced process for lean system transformation that includes kanban sizing, physical layout, and quantification of other parameters such that the risk of system performance objectives not being met by the first transformation activities is low. Developing such a process requires future state validation which can be accomplished by integrating simulation modeling and experimentation into a lean transformation process. Simulation is used to provide quantitative validation evidence that system requirements and objectives will be met by the first system transformation. Industrial applications are presented to demonstrate the effectiveness of the new framework.
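As a concrete illustration of what validation by simulation means here, the sketch below simulates a hypothetical single-station future state and estimates its average cycle time, which can then be compared against a design target before anything is implemented. The arrival and service parameters are invented for illustration, not taken from the industrial applications:

```python
# Minimal future-state validation sketch: estimate the average time a job
# spends in a single-station system with random arrivals and service.
import random

def simulate_cycle_time(n_jobs=10_000, mean_arrival=1.0, mean_service=0.8, seed=42):
    rng = random.Random(seed)
    clock = 0.0          # time of the current arrival
    server_free = 0.0    # time at which the station becomes free
    total_time_in_system = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(1.0 / mean_arrival)   # next arrival
        start = max(clock, server_free)                # wait if station busy
        server_free = start + rng.expovariate(1.0 / mean_service)
        total_time_in_system += server_free - clock
    return total_time_in_system / n_jobs
```

In practice the model would cover the whole value stream, but the validation logic is the same: run the model, compare the estimated measure to the performance objective, and only then implement.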



A logistic approximation to the cumulative normal distribution

This paper develops a logistic approximation to the cumulative normal distribution. Although the literature contains a vast collection of approximating functions for the normal distribution, they are very complicated, not very accurate, or valid over only a limited range. This paper proposes an enhanced approximating function. When the proposed function is compared to other approximations studied in the literature, it can be observed that it has a simpler functional form and gives higher accuracy, with a maximum error of less than 0.00014 over the entire range. This is, to the best of the authors' knowledge, the lowest level of error reported in the literature. The proposed logistic approximating function may appeal to researchers, practitioners and educators given its functional simplicity and mathematical accuracy.

The most important continuous probability distribution used in engineering and science is perhaps the normal distribution. The normal distribution reasonably describes many phenomena that occur in nature. In addition, errors in measurement are extremely well approximated by the normal distribution. In 1733, DeMoivre developed the mathematical equation of the normal curve; it provided a basis on which much of the theory of inductive statistics is founded. The normal distribution is often referred to as the Gaussian distribution, in honor of Karl Friedrich Gauss, who also derived its equation from a study of errors in repeated measurements of an unknown quantity. The normal distribution finds numerous applications as a limiting distribution: under certain conditions, it provides a good approximation to the binomial and hypergeometric distributions, and the limiting distribution of sample averages is normal. This provides a broad base for statistical inference that proves very valuable in estimation and hypothesis testing. If a random variable X is normally distributed with mean μ and variance σ², its probability density function is

f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)), −∞ < x < ∞,

and the cumulative probability P(X ≤ x) is the integral of f from −∞ to x.

Unfortunately, there is no closed-form solution available for the above integral, and the values are usually found from the tables of the cumulative normal distribution. From a practical point of view, however, the standard normal distribution table only provides the cumulative probabilities associated with certain discrete z-values. When the z-value of interest is not available from the table, which frequently happens, practitioners often guess its probability by means of a linear interpolation using two adjacent z-values, or rely on statistical software.

In order to rectify this practical inconvenience, a number of approximate functions for the cumulative normal distribution have been reported in the research community. The literature review indicates, however, that they are mathematically complicated, not very accurate, or invalid when the entire range of z-values is considered. In order to address these shortcomings, this paper develops a logistic approximating function for the cumulative normal distribution. The mathematical form of the proposed function is much simpler than that of the majority of other approximate functions studied in the literature; in fact, probabilities can even be obtained with a calculator. Further, the accuracy of the proposed function is higher than that of the other approximate functions.
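A sketch of the idea in Python, using a cubic-logistic form with coefficients commonly reported for this family of approximations (quoted from memory, so treat them as an assumption to verify), checked against the exact CDF computed via the error function:

```python
# Logistic-type approximation of the standard normal CDF versus the exact
# value obtained from math.erf.
import math

def phi_exact(z: float) -> float:
    """Exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_logistic(z: float) -> float:
    """Cubic-logistic approximation (coefficients assumed, verify before use)."""
    return 1.0 / (1.0 + math.exp(-(0.07056 * z**3 + 1.5976 * z)))

# Maximum absolute deviation over a grid covering z in [-5, 5]
max_err = max(abs(phi_exact(z) - phi_logistic(z))
              for z in [i / 100.0 for i in range(-500, 501)])
```

If the coefficients are right, the maximum deviation should be on the order of 10⁻⁴, consistent with the error level quoted in the abstract.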

The remainder of the paper is organized as follows. In section 2, the existing literature on approximations to the cumulative normal distribution is discussed. Section 3 first discusses the logistic distribution and notes the similarities and differences between the logistic and normal distributions. Section 3 then proposes the modified logistic approximating function by numerically identifying polynomial regression coefficients in such a way that the maximum absolute deviation between the cumulative normal distribution and the modified logistic function is minimized. Section 4 evaluates the accuracy of the proposed approximating function, and section 5 discusses and concludes on the results of this paper.


An effective lean manufacturing implementation

Many companies are implementing the lean manufacturing concept in order to remain competitive and sustainable; however, not many of them succeed in the process, for various reasons. Communication is an important aspect of the lean process for the successful implementation of lean manufacturing. This paper examines the roles of the communication process in ensuring a successful implementation of leanness in manufacturing companies. Information on lean manufacturing practices and the roles of communication in their implementation was compiled from related journals, books and websites. A study was conducted at an aerospace manufacturing company in Malaysia, using a five-point scale questionnaire as the study instrument. The questionnaires were distributed to 45 employees working in a kitting department and to 8 members of top management. The results indicate that the degree of leanness was moderate.
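As an illustration of how a degree of leanness can be derived from a five-point questionnaire, the sketch below averages item scores and bands the overall mean; the band boundaries and the response data are illustrative assumptions, not the instrument or data used in the study:

```python
# Hypothetical leanness scoring from five-point (Likert-style) responses.
def leanness_band(mean_score: float) -> str:
    """Band a 1-5 mean score into low / moderate / high (assumed boundaries)."""
    if mean_score < 2.34:
        return "low"
    if mean_score <= 3.67:
        return "moderate"
    return "high"

# Illustrative responses: one list of item scores per respondent.
responses = [[3, 4, 2, 3], [4, 3, 3, 2], [2, 3, 4, 3]]
overall = sum(sum(r) for r in responses) / sum(len(r) for r in responses)
band = leanness_band(overall)
```

With the illustrative data above the overall mean is 3.0, which falls in the moderate band, the same qualitative result the study reports.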

Interest in the concept of lean production or lean manufacturing has grown and gained attention in the literature and in practice (Soriano-Meier et al., 2002 and Karlsson et al., 1996). Many organizations have employed lean manufacturing practices to improve competitiveness during the economic slowdown periods (Worley et al., 2006). According to Bhasin et al. (2006), less than 10 per cent of United Kingdom organizations have accomplished lean manufacturing implementation successfully. A number of variables may have impacts on lean implementation, and management support plays an important role in a lean manufacturing implementation (Worley et al., 2006). However, since lean implementation involves employees at all levels, there is a need for a good communication process to enable a smooth flow of the process. One of the main challenges of communication is to ensure that the changes are being readily accepted and implemented by everyone at all levels.

Karlsson et al. (1996) stated that lean should be seen as a direction, with the focus lying on changes in its determinants. The determinants that reflect changes in the effort to become lean were identified by Karlsson et al. (1997). It is essential to note that lean production is viewed as a complex organizational principle that requires major changes in a company (Mathaisel et al., 2000). Hence, there is a positive relationship between investment in the supporting manufacturing infrastructure and actual changes towards lean manufacturing (Soriano-Meier et al., 2002).

