Apr 12 18:53:12.976798 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 18:53:12.976821 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:53:12.976830 kernel: BIOS-provided physical RAM map:
Apr 12 18:53:12.976835 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 12 18:53:12.976841 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 12 18:53:12.976846 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 12 18:53:12.976853 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Apr 12 18:53:12.976859 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Apr 12 18:53:12.976866 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 12 18:53:12.976872 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 12 18:53:12.976877 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 12 18:53:12.976883 kernel: NX (Execute Disable) protection: active
Apr 12 18:53:12.976920 kernel: SMBIOS 2.8 present.
Apr 12 18:53:12.976927 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 12 18:53:12.976936 kernel: Hypervisor detected: KVM
Apr 12 18:53:12.976942 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 18:53:12.976948 kernel: kvm-clock: cpu 0, msr 14191001, primary cpu clock
Apr 12 18:53:12.976954 kernel: kvm-clock: using sched offset of 2675229846 cycles
Apr 12 18:53:12.976961 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 18:53:12.976967 kernel: tsc: Detected 2794.748 MHz processor
Apr 12 18:53:12.976973 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 18:53:12.976980 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 18:53:12.976989 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Apr 12 18:53:12.976997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 18:53:12.977003 kernel: Using GB pages for direct mapping
Apr 12 18:53:12.977010 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:53:12.977016 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Apr 12 18:53:12.977022 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977028 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977034 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977040 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 12 18:53:12.977047 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977054 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977060 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:53:12.977067 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Apr 12 18:53:12.982197 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Apr 12 18:53:12.982212 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 12 18:53:12.982219 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Apr 12 18:53:12.982225 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Apr 12 18:53:12.982232 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Apr 12 18:53:12.982244 kernel: No NUMA configuration found
Apr 12 18:53:12.982251 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Apr 12 18:53:12.982257 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Apr 12 18:53:12.982264 kernel: Zone ranges:
Apr 12 18:53:12.982292 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 18:53:12.982299 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdcfff]
Apr 12 18:53:12.982307 kernel:   Normal   empty
Apr 12 18:53:12.982314 kernel: Movable zone start for each node
Apr 12 18:53:12.982320 kernel: Early memory node ranges
Apr 12 18:53:12.982327 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Apr 12 18:53:12.982334 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdcfff]
Apr 12 18:53:12.982340 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Apr 12 18:53:12.982347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:53:12.982353 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 12 18:53:12.982372 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Apr 12 18:53:12.982381 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 12 18:53:12.982388 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 18:53:12.982395 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 18:53:12.982405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 12 18:53:12.982411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 18:53:12.982418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 18:53:12.982425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 18:53:12.982444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 18:53:12.982450 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 18:53:12.982459 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 12 18:53:12.982465 kernel: TSC deadline timer available
Apr 12 18:53:12.982472 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 12 18:53:12.982491 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 12 18:53:12.982510 kernel: kvm-guest: setup PV sched yield
Apr 12 18:53:12.982519 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Apr 12 18:53:12.982526 kernel: Booting paravirtualized kernel on KVM
Apr 12 18:53:12.982533 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 18:53:12.982540 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Apr 12 18:53:12.982745 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Apr 12 18:53:12.982754 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Apr 12 18:53:12.982761 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 12 18:53:12.982767 kernel: kvm-guest: setup async PF for cpu 0
Apr 12 18:53:12.982774 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Apr 12 18:53:12.982780 kernel: kvm-guest: PV spinlocks enabled
Apr 12 18:53:12.982800 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 18:53:12.982807 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Apr 12 18:53:12.982813 kernel: Policy zone: DMA32
Apr 12 18:53:12.982821 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:53:12.982831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:53:12.982850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:53:12.982858 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:53:12.982864 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:53:12.982871 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 134792K reserved, 0K cma-reserved)
Apr 12 18:53:12.982878 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:53:12.982916 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 18:53:12.982927 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 18:53:12.982934 kernel: rcu: Hierarchical RCU implementation.
Apr 12 18:53:12.982941 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:53:12.982957 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:53:12.982968 kernel: Rude variant of Tasks RCU enabled.
Apr 12 18:53:12.982975 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:53:12.982982 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:53:12.983023 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:53:12.983035 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 12 18:53:12.983042 kernel: random: crng init done
Apr 12 18:53:12.983051 kernel: Console: colour VGA+ 80x25
Apr 12 18:53:12.983227 kernel: printk: console [ttyS0] enabled
Apr 12 18:53:12.983235 kernel: ACPI: Core revision 20210730
Apr 12 18:53:12.983242 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 12 18:53:12.983249 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 18:53:12.983255 kernel: x2apic enabled
Apr 12 18:53:12.983262 kernel: Switched APIC routing to physical x2apic.
Apr 12 18:53:12.983289 kernel: kvm-guest: setup PV IPIs
Apr 12 18:53:12.983297 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 12 18:53:12.983306 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 12 18:53:12.983312 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 12 18:53:12.983319 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 12 18:53:12.983326 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 12 18:53:12.983332 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 12 18:53:12.983349 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 18:53:12.983365 kernel: Spectre V2 : Mitigation: Retpolines
Apr 12 18:53:12.983372 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 18:53:12.983381 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 18:53:12.983395 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 12 18:53:12.983414 kernel: RETBleed: Mitigation: untrained return thunk
Apr 12 18:53:12.983424 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 18:53:12.983431 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 18:53:12.983438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 18:53:12.983445 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 18:53:12.983458 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 18:53:12.983471 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 18:53:12.983478 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 18:53:12.983487 kernel: Freeing SMP alternatives memory: 32K
Apr 12 18:53:12.983512 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:53:12.983524 kernel: LSM: Security Framework initializing
Apr 12 18:53:12.983531 kernel: SELinux: Initializing.
Apr 12 18:53:12.983538 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:53:12.983545 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:53:12.983552 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 12 18:53:12.983662 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 12 18:53:12.983669 kernel: ... version:                0
Apr 12 18:53:12.983676 kernel: ... bit width:              48
Apr 12 18:53:12.983683 kernel: ... generic registers:      6
Apr 12 18:53:12.983690 kernel: ... value mask:             0000ffffffffffff
Apr 12 18:53:12.983697 kernel: ... max period:             00007fffffffffff
Apr 12 18:53:12.983704 kernel: ... fixed-purpose events:   0
Apr 12 18:53:12.983711 kernel: ... event mask:             000000000000003f
Apr 12 18:53:12.983718 kernel: signal: max sigframe size: 1776
Apr 12 18:53:12.983727 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:53:12.983733 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:53:12.983740 kernel: x86: Booting SMP configuration:
Apr 12 18:53:12.983747 kernel: .... node #0, CPUs: #1
Apr 12 18:53:12.983754 kernel: kvm-clock: cpu 1, msr 14191041, secondary cpu clock
Apr 12 18:53:12.983761 kernel: kvm-guest: setup async PF for cpu 1
Apr 12 18:53:12.983768 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Apr 12 18:53:12.983775 kernel: #2
Apr 12 18:53:12.983782 kernel: kvm-clock: cpu 2, msr 14191081, secondary cpu clock
Apr 12 18:53:12.983789 kernel: kvm-guest: setup async PF for cpu 2
Apr 12 18:53:12.983797 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Apr 12 18:53:12.983804 kernel: #3
Apr 12 18:53:12.983811 kernel: kvm-clock: cpu 3, msr 141910c1, secondary cpu clock
Apr 12 18:53:12.983818 kernel: kvm-guest: setup async PF for cpu 3
Apr 12 18:53:12.983825 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Apr 12 18:53:12.983832 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:53:12.983838 kernel: smpboot: Max logical packages: 1
Apr 12 18:53:12.983845 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 12 18:53:12.983852 kernel: devtmpfs: initialized
Apr 12 18:53:12.983863 kernel: x86/mm: Memory block size: 128MB
Apr 12 18:53:12.983871 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:53:12.983878 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:53:12.983885 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:53:12.983901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:53:12.983909 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:53:12.983916 kernel: audit: type=2000 audit(1712947992.505:1): state=initialized audit_enabled=0 res=1
Apr 12 18:53:12.983922 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:53:12.983929 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 18:53:12.983938 kernel: cpuidle: using governor menu
Apr 12 18:53:12.983945 kernel: ACPI: bus type PCI registered
Apr 12 18:53:12.983952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:53:12.983959 kernel: dca service started, version 1.12.1
Apr 12 18:53:12.983966 kernel: PCI: Using configuration type 1 for base access
Apr 12 18:53:12.983973 kernel: PCI: Using configuration type 1 for extended access
Apr 12 18:53:12.983980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 18:53:12.983986 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:53:12.983996 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:53:12.984020 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:53:12.984040 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:53:12.984048 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:53:12.984055 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:53:12.984062 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:53:12.984069 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:53:12.984076 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:53:12.984083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:53:12.984090 kernel: ACPI: Interpreter enabled
Apr 12 18:53:12.984099 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 18:53:12.984106 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 18:53:12.984113 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 18:53:12.984120 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 12 18:53:12.984127 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:53:12.984260 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:53:12.984282 kernel: acpiphp: Slot [3] registered
Apr 12 18:53:12.984289 kernel: acpiphp: Slot [4] registered
Apr 12 18:53:12.984298 kernel: acpiphp: Slot [5] registered
Apr 12 18:53:12.984305 kernel: acpiphp: Slot [6] registered
Apr 12 18:53:12.984312 kernel: acpiphp: Slot [7] registered
Apr 12 18:53:12.984319 kernel: acpiphp: Slot [8] registered
Apr 12 18:53:12.984325 kernel: acpiphp: Slot [9] registered
Apr 12 18:53:12.984332 kernel: acpiphp: Slot [10] registered
Apr 12 18:53:12.984340 kernel: acpiphp: Slot [11] registered
Apr 12 18:53:12.984346 kernel: acpiphp: Slot [12] registered
Apr 12 18:53:12.984355 kernel: acpiphp: Slot [13] registered
Apr 12 18:53:12.984363 kernel: acpiphp: Slot [14] registered
Apr 12 18:53:12.984372 kernel: acpiphp: Slot [15] registered
Apr 12 18:53:12.984381 kernel: acpiphp: Slot [16] registered
Apr 12 18:53:12.984387 kernel: acpiphp: Slot [17] registered
Apr 12 18:53:12.984394 kernel: acpiphp: Slot [18] registered
Apr 12 18:53:12.984401 kernel: acpiphp: Slot [19] registered
Apr 12 18:53:12.984408 kernel: acpiphp: Slot [20] registered
Apr 12 18:53:12.984415 kernel: acpiphp: Slot [21] registered
Apr 12 18:53:12.984422 kernel: acpiphp: Slot [22] registered
Apr 12 18:53:12.984428 kernel: acpiphp: Slot [23] registered
Apr 12 18:53:12.984437 kernel: acpiphp: Slot [24] registered
Apr 12 18:53:12.984444 kernel: acpiphp: Slot [25] registered
Apr 12 18:53:12.984450 kernel: acpiphp: Slot [26] registered
Apr 12 18:53:12.984457 kernel: acpiphp: Slot [27] registered
Apr 12 18:53:12.984464 kernel: acpiphp: Slot [28] registered
Apr 12 18:53:12.984471 kernel: acpiphp: Slot [29] registered
Apr 12 18:53:12.984478 kernel: acpiphp: Slot [30] registered
Apr 12 18:53:12.984485 kernel: acpiphp: Slot [31] registered
Apr 12 18:53:12.984491 kernel: PCI host bridge to bus 0000:00
Apr 12 18:53:12.984742 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 18:53:12.984836 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 18:53:12.984927 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 18:53:12.984994 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Apr 12 18:53:12.985059 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 12 18:53:12.985123 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:53:12.985208 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 18:53:12.985309 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 12 18:53:12.985390 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 12 18:53:12.985463 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Apr 12 18:53:12.985535 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 12 18:53:12.985608 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 12 18:53:12.985681 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 12 18:53:12.985756 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 12 18:53:12.985837 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 18:53:12.985984 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 12 18:53:12.986059 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 12 18:53:12.986164 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Apr 12 18:53:12.986238 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 12 18:53:12.986328 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 12 18:53:12.986404 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 12 18:53:12.986478 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 12 18:53:12.986557 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:53:12.986631 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 18:53:12.986709 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 12 18:53:12.986782 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 12 18:53:12.986862 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 12 18:53:12.986954 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 12 18:53:12.987030 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 12 18:53:12.987104 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 12 18:53:12.987225 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Apr 12 18:53:12.987327 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Apr 12 18:53:12.988393 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 12 18:53:12.988654 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 12 18:53:12.988738 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 12 18:53:12.988748 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 18:53:12.988755 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 18:53:12.988762 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 18:53:12.988769 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 18:53:12.988776 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 18:53:12.988784 kernel: iommu: Default domain type: Translated
Apr 12 18:53:12.988791 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 18:53:12.988864 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 12 18:53:12.988965 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 12 18:53:12.989039 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 12 18:53:12.989049 kernel: vgaarb: loaded
Apr 12 18:53:12.989056 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:53:12.989063 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:53:12.989070 kernel: PTP clock support registered
Apr 12 18:53:12.989077 kernel: PCI: Using ACPI for IRQ routing
Apr 12 18:53:12.989084 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 18:53:12.989094 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 12 18:53:12.989101 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Apr 12 18:53:12.989108 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 12 18:53:12.989115 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 12 18:53:12.989123 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 18:53:12.989130 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:53:12.989137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:53:12.989144 kernel: pnp: PnP ACPI init
Apr 12 18:53:12.989227 kernel: pnp 00:02: [dma 2]
Apr 12 18:53:12.989239 kernel: pnp: PnP ACPI: found 6 devices
Apr 12 18:53:12.989247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 18:53:12.989254 kernel: NET: Registered PF_INET protocol family
Apr 12 18:53:12.989261 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:53:12.989277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:53:12.989284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:53:12.989291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:53:12.989299 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:53:12.989307 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:53:12.989315 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:53:12.989322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:53:12.989329 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:53:12.989336 kernel: NET: Registered PF_XDP protocol family
Apr 12 18:53:12.989406 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 18:53:12.989473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 18:53:12.989540 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 18:53:12.989624 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Apr 12 18:53:12.989759 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 12 18:53:12.989900 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 12 18:53:12.989979 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 18:53:12.990053 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Apr 12 18:53:12.990063 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:53:12.990071 kernel: Initialise system trusted keyrings
Apr 12 18:53:12.990078 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:53:12.990085 kernel: Key type asymmetric registered
Apr 12 18:53:12.990095 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:53:12.990102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:53:12.990109 kernel: io scheduler mq-deadline registered
Apr 12 18:53:12.990116 kernel: io scheduler kyber registered
Apr 12 18:53:12.990123 kernel: io scheduler bfq registered
Apr 12 18:53:12.990130 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 18:53:12.990138 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 18:53:12.990145 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 18:53:12.990152 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 18:53:12.990161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:53:12.990168 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 18:53:12.990176 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 18:53:12.990183 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 18:53:12.990190 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 18:53:12.990274 kernel: rtc_cmos 00:05: RTC can wake from S4
Apr 12 18:53:12.990285 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 18:53:12.990353 kernel: rtc_cmos 00:05: registered as rtc0
Apr 12 18:53:12.990423 kernel: rtc_cmos 00:05: setting system clock to 2024-04-12T18:53:12 UTC (1712947992)
Apr 12 18:53:12.990492 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 12 18:53:12.990501 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:53:12.990508 kernel: Segment Routing with IPv6
Apr 12 18:53:12.990515 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:53:12.990523 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:53:12.990530 kernel: Key type dns_resolver registered
Apr 12 18:53:12.990537 kernel: IPI shorthand broadcast: enabled
Apr 12 18:53:12.990544 kernel: sched_clock: Marking stable (460054522, 101238868)->(657797360, -96503970)
Apr 12 18:53:12.990553 kernel: registered taskstats version 1
Apr 12 18:53:12.990560 kernel: Loading compiled-in X.509 certificates
Apr 12 18:53:12.990568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 18:53:12.990575 kernel: Key type .fscrypt registered
Apr 12 18:53:12.990582 kernel: Key type fscrypt-provisioning registered
Apr 12 18:53:12.990589 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:53:12.990596 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:53:12.990603 kernel: ima: No architecture policies found
Apr 12 18:53:12.990611 kernel: Freeing unused kernel image (initmem) memory: 47440K
Apr 12 18:53:12.990618 kernel: Write protecting the kernel read-only data: 28672k
Apr 12 18:53:12.990626 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Apr 12 18:53:12.990633 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K
Apr 12 18:53:12.990640 kernel: Run /init as init process
Apr 12 18:53:12.990647 kernel:   with arguments:
Apr 12 18:53:12.990654 kernel:     /init
Apr 12 18:53:12.990661 kernel:   with environment:
Apr 12 18:53:12.990679 kernel:     HOME=/
Apr 12 18:53:12.990687 kernel:     TERM=linux
Apr 12 18:53:12.990696 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:53:12.990705 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:53:12.990715 systemd[1]: Detected virtualization kvm.
Apr 12 18:53:12.990723 systemd[1]: Detected architecture x86-64.
Apr 12 18:53:12.990731 systemd[1]: Running in initrd.
Apr 12 18:53:12.990738 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:53:12.990746 systemd[1]: Hostname set to .
Apr 12 18:53:12.990801 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:53:12.990810 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:53:12.990817 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:53:12.990840 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:53:12.990848 systemd[1]: Reached target paths.target.
Apr 12 18:53:12.990855 systemd[1]: Reached target slices.target.
Apr 12 18:53:12.990863 systemd[1]: Reached target swap.target.
Apr 12 18:53:12.990871 systemd[1]: Reached target timers.target.
Apr 12 18:53:12.991001 systemd[1]: Listening on iscsid.socket.
Apr 12 18:53:12.991011 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:53:12.991019 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:53:12.991040 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:53:12.991048 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:53:12.991056 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:53:12.991064 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:53:12.991072 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:53:12.991081 systemd[1]: Reached target sockets.target.
Apr 12 18:53:12.991099 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:53:12.991107 systemd[1]: Finished network-cleanup.service.
Apr 12 18:53:12.991115 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:53:12.991122 systemd[1]: Starting systemd-journald.service...
Apr 12 18:53:12.991130 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:53:12.991140 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:53:12.991256 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:53:12.991265 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:53:12.991279 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:53:12.991288 kernel: audit: type=1130 audit(1712947992.970:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:12.991296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:53:12.991304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:53:12.991315 systemd-journald[197]: Journal started
Apr 12 18:53:12.991357 systemd-journald[197]: Runtime Journal (/run/log/journal/1468baae763146e3b947534e71661b6e) is 6.0M, max 48.5M, 42.5M free.
Apr 12 18:53:12.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:12.981604 systemd-modules-load[198]: Inserted module 'overlay' Apr 12 18:53:13.018900 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 18:53:13.018916 systemd[1]: Started systemd-journald.service. Apr 12 18:53:13.018931 kernel: Bridge firewalling registered Apr 12 18:53:13.018940 kernel: audit: type=1130 audit(1712947993.012:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:12.992204 systemd-resolved[199]: Positive Trust Anchors: Apr 12 18:53:13.023081 kernel: audit: type=1130 audit(1712947993.018:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:12.992223 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:53:12.992252 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:53:12.994682 systemd-resolved[199]: Defaulting to hostname 'linux'. Apr 12 18:53:13.018849 systemd-modules-load[198]: Inserted module 'br_netfilter' Apr 12 18:53:13.019803 systemd[1]: Started systemd-resolved.service. Apr 12 18:53:13.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.033134 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:53:13.037503 kernel: audit: type=1130 audit(1712947993.032:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.037628 systemd[1]: Reached target nss-lookup.target. Apr 12 18:53:13.041950 kernel: audit: type=1130 audit(1712947993.036:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.042581 systemd[1]: Starting dracut-cmdline-ask.service... 
Apr 12 18:53:13.052927 kernel: SCSI subsystem initialized Apr 12 18:53:13.056842 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:53:13.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.059152 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:53:13.062751 kernel: audit: type=1130 audit(1712947993.057:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.066141 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:53:13.066164 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:53:13.067725 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:53:13.068506 dracut-cmdline[216]: dracut-dracut-053 Apr 12 18:53:13.070772 systemd-modules-load[198]: Inserted module 'dm_multipath' Apr 12 18:53:13.071829 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:53:13.072211 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:53:13.081962 kernel: audit: type=1130 audit(1712947993.076:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:53:13.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.080615 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:53:13.087691 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:53:13.092017 kernel: audit: type=1130 audit(1712947993.087:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.141918 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:53:13.157917 kernel: iscsi: registered transport (tcp) Apr 12 18:53:13.179045 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:53:13.179068 kernel: QLogic iSCSI HBA Driver Apr 12 18:53:13.207643 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:53:13.212862 kernel: audit: type=1130 audit(1712947993.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.209431 systemd[1]: Starting dracut-pre-udev.service... 
Apr 12 18:53:13.255918 kernel: raid6: avx2x4 gen() 29330 MB/s Apr 12 18:53:13.272917 kernel: raid6: avx2x4 xor() 7807 MB/s Apr 12 18:53:13.289912 kernel: raid6: avx2x2 gen() 31293 MB/s Apr 12 18:53:13.306924 kernel: raid6: avx2x2 xor() 18948 MB/s Apr 12 18:53:13.323913 kernel: raid6: avx2x1 gen() 26293 MB/s Apr 12 18:53:13.340917 kernel: raid6: avx2x1 xor() 15180 MB/s Apr 12 18:53:13.357917 kernel: raid6: sse2x4 gen() 14264 MB/s Apr 12 18:53:13.374918 kernel: raid6: sse2x4 xor() 5543 MB/s Apr 12 18:53:13.391915 kernel: raid6: sse2x2 gen() 10863 MB/s Apr 12 18:53:13.408917 kernel: raid6: sse2x2 xor() 6774 MB/s Apr 12 18:53:13.425914 kernel: raid6: sse2x1 gen() 8619 MB/s Apr 12 18:53:13.443408 kernel: raid6: sse2x1 xor() 7588 MB/s Apr 12 18:53:13.443418 kernel: raid6: using algorithm avx2x2 gen() 31293 MB/s Apr 12 18:53:13.443426 kernel: raid6: .... xor() 18948 MB/s, rmw enabled Apr 12 18:53:13.444122 kernel: raid6: using avx2x2 recovery algorithm Apr 12 18:53:13.456915 kernel: xor: automatically using best checksumming function avx Apr 12 18:53:13.544969 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:53:13.554561 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:53:13.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.555000 audit: BPF prog-id=7 op=LOAD Apr 12 18:53:13.555000 audit: BPF prog-id=8 op=LOAD Apr 12 18:53:13.556933 systemd[1]: Starting systemd-udevd.service... Apr 12 18:53:13.569397 systemd-udevd[401]: Using default interface naming scheme 'v252'. Apr 12 18:53:13.573448 systemd[1]: Started systemd-udevd.service. Apr 12 18:53:13.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:53:13.575505 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:53:13.591029 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Apr 12 18:53:13.620909 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:53:13.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.623948 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:53:13.664115 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:53:13.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:13.705923 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 12 18:53:13.707584 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:53:13.729719 kernel: libata version 3.00 loaded. Apr 12 18:53:13.729774 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:53:13.729788 kernel: AES CTR mode by8 optimization enabled Apr 12 18:53:13.730998 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 12 18:53:13.731176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:53:13.731204 kernel: GPT:9289727 != 19775487 Apr 12 18:53:13.731216 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:53:13.731228 kernel: GPT:9289727 != 19775487 Apr 12 18:53:13.731257 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 12 18:53:13.731270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:53:13.735921 kernel: scsi host0: ata_piix Apr 12 18:53:13.736080 kernel: scsi host1: ata_piix Apr 12 18:53:13.736268 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Apr 12 18:53:13.736290 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Apr 12 18:53:13.757137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:53:13.803602 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Apr 12 18:53:13.804420 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:53:13.804676 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:53:13.815013 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:53:13.820749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:53:13.822571 systemd[1]: Starting disk-uuid.service... Apr 12 18:53:13.832366 disk-uuid[517]: Primary Header is updated. Apr 12 18:53:13.832366 disk-uuid[517]: Secondary Entries is updated. Apr 12 18:53:13.832366 disk-uuid[517]: Secondary Header is updated. Apr 12 18:53:13.836915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:53:13.841911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:53:13.844923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:53:13.894936 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 12 18:53:13.894990 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 12 18:53:13.935816 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 12 18:53:13.936183 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:53:13.953922 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 12 18:53:14.840942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:53:14.841427 disk-uuid[518]: The operation has completed successfully. 
Apr 12 18:53:14.868151 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:53:14.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:14.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:14.868238 systemd[1]: Finished disk-uuid.service. Apr 12 18:53:14.874401 systemd[1]: Starting verity-setup.service... Apr 12 18:53:14.886924 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 12 18:53:14.904501 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:53:14.905804 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:53:14.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:14.906973 systemd[1]: Finished verity-setup.service. Apr 12 18:53:14.994917 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:53:14.995127 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:53:14.996027 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:53:14.996711 systemd[1]: Starting ignition-setup.service... Apr 12 18:53:14.999528 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:53:15.008094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:53:15.008133 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:53:15.008150 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:53:15.017071 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:53:15.025908 systemd[1]: Finished ignition-setup.service. 
Apr 12 18:53:15.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.027546 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:53:15.059577 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:53:15.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.061000 audit: BPF prog-id=9 op=LOAD Apr 12 18:53:15.062587 systemd[1]: Starting systemd-networkd.service... Apr 12 18:53:15.092286 systemd-networkd[709]: lo: Link UP Apr 12 18:53:15.092302 systemd-networkd[709]: lo: Gained carrier Apr 12 18:53:15.094636 systemd-networkd[709]: Enumeration completed Apr 12 18:53:15.094727 systemd[1]: Started systemd-networkd.service. Apr 12 18:53:15.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.096672 systemd[1]: Reached target network.target. Apr 12 18:53:15.098549 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:53:15.098917 systemd[1]: Starting iscsiuio.service... 
Apr 12 18:53:15.103435 systemd-networkd[709]: eth0: Link UP Apr 12 18:53:15.103443 systemd-networkd[709]: eth0: Gained carrier Apr 12 18:53:15.107845 ignition[644]: Ignition 2.14.0 Apr 12 18:53:15.107856 ignition[644]: Stage: fetch-offline Apr 12 18:53:15.108002 ignition[644]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:53:15.108014 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:53:15.108151 ignition[644]: parsed url from cmdline: "" Apr 12 18:53:15.108155 ignition[644]: no config URL provided Apr 12 18:53:15.108161 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:53:15.108169 ignition[644]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:53:15.108221 ignition[644]: op(1): [started] loading QEMU firmware config module Apr 12 18:53:15.108227 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 12 18:53:15.115236 ignition[644]: op(1): [finished] loading QEMU firmware config module Apr 12 18:53:15.126120 systemd[1]: Started iscsiuio.service. Apr 12 18:53:15.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.127866 systemd[1]: Starting iscsid.service... Apr 12 18:53:15.131403 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:53:15.131403 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 18:53:15.131403 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Apr 12 18:53:15.131403 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:53:15.131403 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:53:15.131403 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:53:15.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.134762 systemd[1]: Started iscsid.service. Apr 12 18:53:15.148930 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:53:15.159041 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:53:15.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.160200 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:53:15.162777 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:53:15.164728 systemd[1]: Reached target remote-fs.target. Apr 12 18:53:15.166328 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:53:15.173572 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:53:15.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:53:15.218006 ignition[644]: parsing config with SHA512: 77939788a31f1e5a1242746b2f3cc5661367498a9c2e79d20315fcf72ece04103c928093c0db2811f9d282de4de189396a09f6a151f493b0f0fe12fef7a05d08 Apr 12 18:53:15.233001 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:53:15.270595 unknown[644]: fetched base config from "system" Apr 12 18:53:15.270610 unknown[644]: fetched user config from "qemu" Apr 12 18:53:15.271365 ignition[644]: fetch-offline: fetch-offline passed Apr 12 18:53:15.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.272502 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:53:15.271458 ignition[644]: Ignition finished successfully Apr 12 18:53:15.273336 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 18:53:15.273991 systemd[1]: Starting ignition-kargs.service... Apr 12 18:53:15.284725 ignition[730]: Ignition 2.14.0 Apr 12 18:53:15.284734 ignition[730]: Stage: kargs Apr 12 18:53:15.284841 ignition[730]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:53:15.284853 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:53:15.286508 ignition[730]: kargs: kargs passed Apr 12 18:53:15.286561 ignition[730]: Ignition finished successfully Apr 12 18:53:15.290088 systemd[1]: Finished ignition-kargs.service. Apr 12 18:53:15.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.291194 systemd[1]: Starting ignition-disks.service... 
Apr 12 18:53:15.349816 ignition[736]: Ignition 2.14.0 Apr 12 18:53:15.349826 ignition[736]: Stage: disks Apr 12 18:53:15.349942 ignition[736]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:53:15.349953 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:53:15.354554 ignition[736]: disks: disks passed Apr 12 18:53:15.354604 ignition[736]: Ignition finished successfully Apr 12 18:53:15.356766 systemd[1]: Finished ignition-disks.service. Apr 12 18:53:15.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.357342 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:53:15.358751 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:53:15.360358 systemd[1]: Reached target local-fs.target. Apr 12 18:53:15.361803 systemd[1]: Reached target sysinit.target. Apr 12 18:53:15.363226 systemd[1]: Reached target basic.target. Apr 12 18:53:15.365598 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:53:15.376016 systemd-fsck[744]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 18:53:15.380556 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:53:15.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.381850 systemd[1]: Mounting sysroot.mount... Apr 12 18:53:15.387917 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:53:15.388118 systemd[1]: Mounted sysroot.mount. Apr 12 18:53:15.389488 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:53:15.391790 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:53:15.393389 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Apr 12 18:53:15.393422 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:53:15.394740 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:53:15.398629 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:53:15.400519 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:53:15.404201 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:53:15.406872 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:53:15.410154 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:53:15.413315 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:53:15.436967 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:53:15.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.438757 systemd[1]: Starting ignition-mount.service... Apr 12 18:53:15.439713 systemd[1]: Starting sysroot-boot.service... Apr 12 18:53:15.446163 bash[795]: umount: /sysroot/usr/share/oem: not mounted. Apr 12 18:53:15.457855 systemd[1]: Finished sysroot-boot.service. Apr 12 18:53:15.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:53:15.463810 ignition[797]: INFO : Ignition 2.14.0 Apr 12 18:53:15.463810 ignition[797]: INFO : Stage: mount Apr 12 18:53:15.466056 ignition[797]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:53:15.466056 ignition[797]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:53:15.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:15.469469 ignition[797]: INFO : mount: mount passed Apr 12 18:53:15.469469 ignition[797]: INFO : Ignition finished successfully Apr 12 18:53:15.467115 systemd[1]: Finished ignition-mount.service. Apr 12 18:53:15.914033 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:53:15.926546 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Apr 12 18:53:15.926580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:53:15.926594 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:53:15.927372 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:53:15.931058 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:53:15.933503 systemd[1]: Starting ignition-files.service... 
Apr 12 18:53:15.946271 ignition[825]: INFO : Ignition 2.14.0 Apr 12 18:53:15.946271 ignition[825]: INFO : Stage: files Apr 12 18:53:15.948253 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:53:15.948253 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:53:15.948253 ignition[825]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:53:15.952505 ignition[825]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:53:15.952505 ignition[825]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:53:15.952505 ignition[825]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:53:15.952505 ignition[825]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:53:15.952505 ignition[825]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:53:15.952505 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:53:15.952505 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 18:53:15.951061 unknown[825]: wrote ssh authorized keys file for user: core Apr 12 18:53:15.999091 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:53:16.107811 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:53:16.107811 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:53:16.111954 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 18:53:16.583165 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:53:16.862146 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 18:53:16.862146 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:53:16.867769 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:53:16.867769 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Apr 12 18:53:16.988086 systemd-networkd[709]: eth0: Gained IPv6LL Apr 12 18:53:17.289032 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:53:17.474925 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Apr 12 18:53:17.474925 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:53:17.480164 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 12 18:53:17.480164 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 12 18:53:17.480164 ignition[825]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:53:17.480164 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Apr 12 18:53:17.548534 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Apr 12 18:53:17.952121 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Apr 12 18:53:17.952121 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:53:17.956956 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:53:17.956956 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Apr 12 18:53:18.006249 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Apr 12 18:53:18.561636 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Apr 12 18:53:18.565423 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:53:18.565423 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:53:18.565423 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Apr 12 18:53:18.618174 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Apr 12 18:53:19.274410 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Apr 12 18:53:19.279525 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:53:19.279525 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:53:19.279525 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:53:19.279525 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:53:19.279525 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 12 18:53:19.648025 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 12 18:53:19.748717 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(11): [started] processing unit "prepare-critools.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(11): [finished] processing unit "prepare-critools.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(13): [started] processing unit "prepare-helm.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:53:19.751114 ignition[825]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(13): [finished] processing unit "prepare-helm.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(15): [started] processing unit "coreos-metadata.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(17): [started] processing unit "containerd.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(17): [finished] processing unit "containerd.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service"
Apr 12 18:53:19.786035 ignition[825]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 12 18:53:19.845023 ignition[825]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:53:19.848497 ignition[825]: INFO : files: files passed
Apr 12 18:53:19.848497 ignition[825]: INFO : Ignition finished successfully
Apr 12 18:53:19.881077 kernel: kauditd_printk_skb: 24 callbacks suppressed
Apr 12 18:53:19.881119 kernel: audit: type=1130 audit(1712947999.847:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.881134 kernel: audit: type=1130 audit(1712947999.859:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.881150 kernel: audit: type=1130 audit(1712947999.866:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.881164 kernel: audit: type=1131 audit(1712947999.866:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.846778 systemd[1]: Finished ignition-files.service.
Apr 12 18:53:19.849639 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:53:19.855080 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:53:19.886392 initrd-setup-root-after-ignition[849]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Apr 12 18:53:19.856029 systemd[1]: Starting ignition-quench.service...
Apr 12 18:53:19.889477 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:53:19.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.857969 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:53:19.903420 kernel: audit: type=1130 audit(1712947999.891:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.903453 kernel: audit: type=1131 audit(1712947999.891:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.860996 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:53:19.861063 systemd[1]: Finished ignition-quench.service.
Apr 12 18:53:19.867780 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:53:19.878652 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:53:19.889764 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:53:19.889836 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:53:19.892475 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:53:19.900555 systemd[1]: Reached target initrd.target.
Apr 12 18:53:19.901260 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:53:19.901936 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:53:19.918635 kernel: audit: type=1130 audit(1712947999.913:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.912650 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:53:19.915306 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:53:19.925129 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:53:19.927005 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:53:19.928635 systemd[1]: Stopped target timers.target.
Apr 12 18:53:19.930371 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:53:19.936430 kernel: audit: type=1131 audit(1712947999.931:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.930520 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:53:19.932151 systemd[1]: Stopped target initrd.target.
Apr 12 18:53:19.936558 systemd[1]: Stopped target basic.target.
Apr 12 18:53:19.938242 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:53:19.939829 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:53:19.941404 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:53:19.943140 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:53:19.944746 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:53:19.946425 systemd[1]: Stopped target sysinit.target.
Apr 12 18:53:19.947974 systemd[1]: Stopped target local-fs.target.
Apr 12 18:53:19.949558 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:53:19.951113 systemd[1]: Stopped target swap.target.
Apr 12 18:53:19.958703 kernel: audit: type=1131 audit(1712947999.953:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.952559 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:53:19.952704 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:53:19.965185 kernel: audit: type=1131 audit(1712947999.959:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.954337 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:53:19.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.958774 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:53:19.958921 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:53:19.960708 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:53:19.960878 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:53:19.965358 systemd[1]: Stopped target paths.target.
Apr 12 18:53:19.966848 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:53:19.971945 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:53:19.973285 systemd[1]: Stopped target slices.target.
Apr 12 18:53:19.974973 systemd[1]: Stopped target sockets.target.
Apr 12 18:53:19.976466 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:53:19.976557 systemd[1]: Closed iscsid.socket.
Apr 12 18:53:19.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.978052 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:53:19.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.978196 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:53:19.979965 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:53:19.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.980102 systemd[1]: Stopped ignition-files.service.
Apr 12 18:53:19.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.990421 ignition[866]: INFO : Ignition 2.14.0
Apr 12 18:53:19.990421 ignition[866]: INFO : Stage: umount
Apr 12 18:53:19.990421 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:53:19.990421 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:53:19.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.982485 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:53:19.997616 ignition[866]: INFO : umount: umount passed
Apr 12 18:53:19.997616 ignition[866]: INFO : Ignition finished successfully
Apr 12 18:53:19.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.983465 systemd[1]: Stopping iscsiuio.service...
Apr 12 18:53:19.986503 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:53:20.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.987311 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:53:19.987469 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:53:19.988332 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:53:20.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.988488 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:53:19.992161 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:53:19.992267 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:53:19.993221 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:53:20.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.993313 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:53:19.994990 systemd[1]: Stopped target network.target.
Apr 12 18:53:19.996577 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:53:20.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.996618 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:53:19.998217 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:53:20.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:19.998264 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:53:19.999503 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:53:19.999590 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:53:20.001202 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:53:20.001246 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:53:20.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.003198 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:53:20.004826 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:53:20.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.005688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:53:20.005782 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:53:20.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.036000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:53:20.009949 systemd-networkd[709]: eth0: DHCPv6 lease lost
Apr 12 18:53:20.036000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:53:20.011310 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:53:20.011410 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:53:20.013002 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:53:20.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.013043 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:53:20.016286 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:53:20.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.018393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:53:20.018447 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:53:20.020156 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:53:20.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.020202 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:53:20.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.022101 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:53:20.022147 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:53:20.024137 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:53:20.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:20.027106 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 18:53:20.028975 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 18:53:20.029492 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:53:20.029627 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:53:20.032010 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:53:20.032168 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:53:20.034718 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:53:20.034809 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:53:20.037195 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:53:20.037252 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:53:20.069000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:53:20.069000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:53:20.038962 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:53:20.038998 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:53:20.071000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:53:20.071000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:53:20.071000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:53:20.040799 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:53:20.040844 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:53:20.042414 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:53:20.042459 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:53:20.044221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:53:20.044268 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:53:20.045870 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:53:20.045930 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:53:20.048294 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:53:20.049346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 12 18:53:20.049400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Apr 12 18:53:20.052078 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:53:20.052139 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:53:20.053050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:53:20.088944 systemd-journald[197]: Received SIGTERM from PID 1 (n/a).
Apr 12 18:53:20.088977 iscsid[716]: iscsid shutting down.
Apr 12 18:53:20.053106 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:53:20.055609 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 12 18:53:20.056186 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:53:20.056281 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:53:20.057640 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:53:20.057733 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:53:20.059365 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:53:20.062375 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:53:20.068040 systemd[1]: Switching root.
Apr 12 18:53:20.096746 systemd-journald[197]: Journal stopped
Apr 12 18:53:24.212169 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:53:24.212226 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:53:24.212242 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:53:24.212259 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:53:24.212277 kernel: SELinux: policy capability open_perms=1
Apr 12 18:53:24.212290 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:53:24.212303 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:53:24.212316 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:53:24.212329 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:53:24.212342 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:53:24.212355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:53:24.212374 systemd[1]: Successfully loaded SELinux policy in 39.028ms.
Apr 12 18:53:24.212398 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.655ms.
Apr 12 18:53:24.212415 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:53:24.212426 systemd[1]: Detected virtualization kvm.
Apr 12 18:53:24.212436 systemd[1]: Detected architecture x86-64.
Apr 12 18:53:24.212447 systemd[1]: Detected first boot.
Apr 12 18:53:24.212457 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:53:24.212467 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:53:24.212477 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:53:24.212487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:53:24.212503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:53:24.212514 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:53:24.212525 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:53:24.212535 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Apr 12 18:53:24.212547 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:53:24.212557 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:53:24.212570 systemd[1]: Created slice system-getty.slice.
Apr 12 18:53:24.212582 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:53:24.212598 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:53:24.212612 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:53:24.212624 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:53:24.212635 systemd[1]: Created slice user.slice.
Apr 12 18:53:24.212645 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:53:24.212655 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:53:24.212666 systemd[1]: Set up automount boot.automount.
Apr 12 18:53:24.212676 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:53:24.212686 systemd[1]: Reached target integritysetup.target.
Apr 12 18:53:24.212698 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:53:24.212708 systemd[1]: Reached target remote-fs.target.
Apr 12 18:53:24.212719 systemd[1]: Reached target slices.target.
Apr 12 18:53:24.212729 systemd[1]: Reached target swap.target.
Apr 12 18:53:24.212739 systemd[1]: Reached target torcx.target.
Apr 12 18:53:24.212750 systemd[1]: Reached target veritysetup.target.
Apr 12 18:53:24.212760 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:53:24.212770 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:53:24.212783 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:53:24.212793 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:53:24.212804 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:53:24.212814 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:53:24.212824 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:53:24.212835 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:53:24.212845 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:53:24.212855 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:53:24.212866 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:53:24.212875 systemd[1]: Mounting media.mount...
Apr 12 18:53:24.212888 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:53:24.212918 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:53:24.212929 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:53:24.212939 systemd[1]: Mounting tmp.mount...
Apr 12 18:53:24.212950 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:53:24.212961 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:53:24.212971 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:53:24.212994 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:53:24.213008 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:53:24.213024 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:53:24.213037 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:53:24.213051 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:53:24.213064 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:53:24.213077 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:53:24.213091 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 12 18:53:24.213104 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Apr 12 18:53:24.213118 systemd[1]: Starting systemd-journald.service...
Apr 12 18:53:24.213134 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:53:24.213149 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:53:24.213164 kernel: fuse: init (API version 7.34)
Apr 12 18:53:24.213177 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:53:24.213189 kernel: loop: module loaded
Apr 12 18:53:24.213201 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:53:24.213214 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:53:24.213227 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:53:24.213240 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:53:24.213253 systemd[1]: Mounted media.mount.
Apr 12 18:53:24.213270 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:53:24.213283 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:53:24.213296 systemd-journald[1006]: Journal started
Apr 12 18:53:24.213344 systemd-journald[1006]: Runtime Journal (/run/log/journal/1468baae763146e3b947534e71661b6e) is 6.0M, max 48.5M, 42.5M free.
Apr 12 18:53:24.120000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 12 18:53:24.210000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:53:24.210000 audit[1006]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc025d0990 a2=4000 a3=7ffc025d0a2c items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:53:24.210000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:53:24.216242 systemd[1]: Started systemd-journald.service.
Apr 12 18:53:24.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.217089 systemd[1]: Mounted tmp.mount.
Apr 12 18:53:24.218260 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:53:24.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.219370 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:53:24.219606 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:53:24.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.220756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:53:24.220999 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:53:24.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.222144 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:53:24.222347 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:53:24.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.223428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:53:24.223622 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:53:24.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.224714 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:53:24.224939 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:53:24.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.226033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:53:24.226330 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:53:24.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.230133 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:53:24.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.231349 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:53:24.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.232619 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:53:24.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.233713 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:53:24.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.234992 systemd[1]: Reached target network-pre.target.
Apr 12 18:53:24.237090 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:53:24.238868 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:53:24.239713 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:53:24.241309 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:53:24.246474 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:53:24.247401 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:53:24.248579 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:53:24.249406 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:53:24.250872 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:53:24.252016 systemd-journald[1006]: Time spent on flushing to /var/log/journal/1468baae763146e3b947534e71661b6e is 16.924ms for 1069 entries.
Apr 12 18:53:24.252016 systemd-journald[1006]: System Journal (/var/log/journal/1468baae763146e3b947534e71661b6e) is 8.0M, max 195.6M, 187.6M free.
Apr 12 18:53:24.277530 systemd-journald[1006]: Received client request to flush runtime journal.
Apr 12 18:53:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.254202 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:53:24.257838 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:53:24.258802 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:53:24.261118 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:53:24.262065 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:53:24.274409 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:53:24.276419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:53:24.278380 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:53:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.281440 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:53:24.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.292157 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:53:24.294332 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:53:24.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.300412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:53:24.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.301652 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 12 18:53:24.874941 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:53:24.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.877200 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:53:24.879844 kernel: kauditd_printk_skb: 76 callbacks suppressed
Apr 12 18:53:24.879881 kernel: audit: type=1130 audit(1712948004.875:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.895842 systemd-udevd[1063]: Using default interface naming scheme 'v252'.
Apr 12 18:53:24.909765 systemd[1]: Started systemd-udevd.service.
Apr 12 18:53:24.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.912710 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:53:24.916907 kernel: audit: type=1130 audit(1712948004.909:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.919758 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:53:24.935026 systemd[1]: Found device dev-ttyS0.device.
Apr 12 18:53:24.951809 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:53:24.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.958039 kernel: audit: type=1130 audit(1712948004.952:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:24.978950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:53:24.991929 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 12 18:53:25.000000 audit[1083]: AVC avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:53:25.007905 kernel: audit: type=1400 audit(1712948005.000:115): avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:53:25.012424 systemd-networkd[1073]: lo: Link UP
Apr 12 18:53:25.012434 systemd-networkd[1073]: lo: Gained carrier
Apr 12 18:53:25.012806 systemd-networkd[1073]: Enumeration completed
Apr 12 18:53:25.012931 systemd[1]: Started systemd-networkd.service.
Apr 12 18:53:25.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:25.000000 audit[1083]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c8dd52f630 a1=32194 a2=7f445cee5bc5 a3=5 items=108 ppid=1063 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:53:25.019576 systemd-networkd[1073]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:53:25.022544 kernel: audit: type=1130 audit(1712948005.012:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:53:25.022587 kernel: audit: type=1300 audit(1712948005.000:115): arch=c000003e syscall=175 success=yes exit=0 a0=55c8dd52f630 a1=32194 a2=7f445cee5bc5 a3=5 items=108 ppid=1063 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:53:25.022604 kernel: ACPI: button: Power Button [PWRF]
Apr 12 18:53:25.022628 kernel: audit: type=1307 audit(1712948005.000:115): cwd="/"
Apr 12 18:53:25.000000 audit: CWD cwd="/"
Apr 12 18:53:25.023995 systemd-networkd[1073]: eth0: Link UP
Apr 12 18:53:25.024009 systemd-networkd[1073]: eth0: Gained carrier
Apr 12 18:53:25.000000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.027755 kernel: audit: type=1302 audit(1712948005.000:115): item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.027820 kernel: audit: type=1302 audit(1712948005.000:115): item=1 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=1 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=2 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.034934 kernel: audit: type=1302 audit(1712948005.000:115): item=2 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=3 name=(null) inode=15546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=4 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=5 name=(null) inode=15547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=6 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=7 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=8 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=9 name=(null) inode=15549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=10 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=11 name=(null) inode=15550 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=12 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=13 name=(null) inode=15551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=14 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=15 name=(null) inode=15552 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=16 name=(null) inode=15548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=17 name=(null) inode=15553 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=18 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=19 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=20 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=21 name=(null) inode=15555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=22 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=23 name=(null) inode=15556 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=24 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=25 name=(null) inode=15557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=26 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=27 name=(null) inode=15558 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=28 name=(null) inode=15554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=29 name=(null) inode=15559 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=30 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=31 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=32 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=33 name=(null) inode=15561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=34 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=35 name=(null) inode=15562 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=36 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=37 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=38 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=39 name=(null) inode=15564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=40 name=(null) inode=15560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=41 name=(null) inode=15565 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=42 name=(null) inode=15545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=43 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=44 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=45 name=(null) inode=15567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=46 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=47 name=(null) inode=15568 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=48 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=49 name=(null) inode=15569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=50 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=51 name=(null) inode=15570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=52 name=(null) inode=15566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=53 name=(null) inode=15571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=55 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=56 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=57 name=(null) inode=15573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=58 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=59 name=(null) inode=15574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=60 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=61 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=62 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=63 name=(null) inode=15576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=64 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=65 name=(null) inode=15577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=66 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=67 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=68 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=69 name=(null) inode=15579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=70 name=(null) inode=15575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=71 name=(null) inode=15580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=72 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=73 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=74 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:53:25.000000 audit: PATH item=75 name=(null) inode=15582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=76 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=77 name=(null) inode=15583 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=78 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=79 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=80 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=81 name=(null) inode=15585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=82 name=(null) inode=15581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=83 name=(null) inode=15586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=84 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=85 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=86 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=87 name=(null) inode=15588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=88 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=89 name=(null) inode=15589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=90 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=91 name=(null) inode=15590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=92 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=93 name=(null) inode=15591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:53:25.000000 audit: PATH item=94 name=(null) inode=15587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=95 name=(null) inode=15592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=96 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=97 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=98 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=99 name=(null) inode=15594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=100 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=101 name=(null) inode=15595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=102 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=103 
name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=104 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=105 name=(null) inode=15597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=106 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PATH item=107 name=(null) inode=15598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:53:25.000000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:53:25.042180 systemd-networkd[1073]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:53:25.042915 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 12 18:53:25.045915 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 12 18:53:25.088947 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:53:25.121425 kernel: kvm: Nested Virtualization enabled Apr 12 18:53:25.121483 kernel: SVM: kvm: Nested Paging enabled Apr 12 18:53:25.121499 kernel: SVM: Virtual VMLOAD VMSAVE supported Apr 12 18:53:25.122905 kernel: SVM: Virtual GIF supported Apr 12 18:53:25.139926 kernel: EDAC MC: Ver: 3.0.0 Apr 12 18:53:25.162578 systemd[1]: Finished systemd-udev-settle.service. 
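The audit `PATH` records above all use the kernel audit subsystem's flat `key=value` layout (`inode=…`, `dev=…`, `mode=…`, `nametype=…`). A minimal sketch of turning one such record into a dict, using a PATH item lifted from the log (values are kept as strings, since fields like `dev=00:0b` and `mode=0100640` are not plain integers):

```python
def parse_audit_fields(record: str) -> dict:
    """Split a flat key=value audit record into a dict (values stay strings)."""
    fields = {}
    for token in record.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# One PATH item lifted from the log above:
record = ("item=49 name=(null) inode=15569 dev=00:0b mode=0100640 "
          "ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 "
          "nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0")
fields = parse_audit_fields(record)
print(fields["inode"], fields["nametype"])  # → 15569 CREATE
```

This naive split is enough for the PATH items shown here; real audit records can contain quoted values with embedded spaces, for which `ausearch -i` or a dedicated parser is the safer tool.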
Apr 12 18:53:25.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.165413 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:53:25.173340 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:53:25.202054 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:53:25.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.203225 systemd[1]: Reached target cryptsetup.target. Apr 12 18:53:25.205616 systemd[1]: Starting lvm2-activation.service... Apr 12 18:53:25.208972 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:53:25.237418 systemd[1]: Finished lvm2-activation.service. Apr 12 18:53:25.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.238413 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:53:25.239294 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:53:25.239314 systemd[1]: Reached target local-fs.target. Apr 12 18:53:25.240135 systemd[1]: Reached target machines.target. Apr 12 18:53:25.242042 systemd[1]: Starting ldconfig.service... Apr 12 18:53:25.243086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Apr 12 18:53:25.243117 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:53:25.243983 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:53:25.245769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:53:25.248154 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:53:25.249332 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:53:25.249383 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:53:25.250381 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:53:25.253149 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) Apr 12 18:53:25.254236 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:53:25.259236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:53:25.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.262722 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:53:25.264444 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:53:25.265742 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:53:25.320337 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Apr 12 18:53:25.320337 systemd-fsck[1113]: /dev/vda1: 789 files, 119240/258078 clusters Apr 12 18:53:25.321356 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Apr 12 18:53:25.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.324708 systemd[1]: Mounting boot.mount... Apr 12 18:53:25.345582 systemd[1]: Mounted boot.mount. Apr 12 18:53:25.395204 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:53:25.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:25.445792 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:53:26.182418 systemd[1]: Finished ldconfig.service. Apr 12 18:53:26.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.196647 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:53:26.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.199384 systemd[1]: Starting audit-rules.service... Apr 12 18:53:26.201186 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:53:26.203540 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:53:26.206228 systemd[1]: Starting systemd-resolved.service... Apr 12 18:53:26.210417 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:53:26.212655 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:53:26.214324 systemd[1]: Finished clean-ca-certificates.service. 
Apr 12 18:53:26.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.215766 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:53:26.218000 audit[1134]: SYSTEM_BOOT pid=1134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.222459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:53:26.223195 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:53:26.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.224519 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:53:26.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.239056 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:53:26.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.241527 systemd[1]: Starting systemd-update-done.service... 
Apr 12 18:53:26.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:53:26.249136 systemd[1]: Finished systemd-update-done.service. Apr 12 18:53:26.252000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:53:26.252000 audit[1145]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd57377af0 a2=420 a3=0 items=0 ppid=1122 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:53:26.252000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:53:26.253995 augenrules[1145]: No rules Apr 12 18:53:26.254483 systemd[1]: Finished audit-rules.service. Apr 12 18:53:26.274383 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:53:26.275543 systemd[1]: Reached target time-set.target. Apr 12 18:53:26.827891 systemd-timesyncd[1133]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:53:26.827931 systemd-timesyncd[1133]: Initial clock synchronization to Fri 2024-04-12 18:53:26.827819 UTC. Apr 12 18:53:26.828244 systemd-resolved[1131]: Positive Trust Anchors: Apr 12 18:53:26.828255 systemd-resolved[1131]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:53:26.828281 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:53:26.911955 systemd-resolved[1131]: Defaulting to hostname 'linux'. Apr 12 18:53:26.914182 systemd[1]: Started systemd-resolved.service. Apr 12 18:53:26.915179 systemd[1]: Reached target network.target. Apr 12 18:53:26.915950 systemd[1]: Reached target nss-lookup.target. Apr 12 18:53:26.916831 systemd[1]: Reached target sysinit.target. Apr 12 18:53:26.917702 systemd[1]: Started motdgen.path. Apr 12 18:53:26.918439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:53:26.919661 systemd[1]: Started logrotate.timer. Apr 12 18:53:26.920449 systemd[1]: Started mdadm.timer. Apr 12 18:53:26.921127 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:53:26.921960 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:53:26.921980 systemd[1]: Reached target paths.target. Apr 12 18:53:26.922718 systemd[1]: Reached target timers.target. Apr 12 18:53:26.923748 systemd[1]: Listening on dbus.socket. Apr 12 18:53:26.925509 systemd[1]: Starting docker.socket... Apr 12 18:53:26.927131 systemd[1]: Listening on sshd.socket. Apr 12 18:53:26.927933 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Apr 12 18:53:26.928187 systemd[1]: Listening on docker.socket. Apr 12 18:53:26.928945 systemd[1]: Reached target sockets.target. Apr 12 18:53:26.929764 systemd[1]: Reached target basic.target. Apr 12 18:53:26.930613 systemd[1]: System is tainted: cgroupsv1 Apr 12 18:53:26.930649 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:53:26.930666 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:53:26.931560 systemd[1]: Starting containerd.service... Apr 12 18:53:26.933103 systemd[1]: Starting dbus.service... Apr 12 18:53:26.934735 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:53:26.936536 systemd[1]: Starting extend-filesystems.service... Apr 12 18:53:26.937449 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:53:26.938582 systemd[1]: Starting motdgen.service... Apr 12 18:53:26.940373 jq[1161]: false Apr 12 18:53:26.940219 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:53:26.942316 systemd[1]: Starting prepare-critools.service... Apr 12 18:53:26.944263 systemd[1]: Starting prepare-helm.service... Apr 12 18:53:26.946357 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:53:26.948234 systemd[1]: Starting sshd-keygen.service... Apr 12 18:53:26.950757 systemd[1]: Starting systemd-logind.service... Apr 12 18:53:26.951552 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:53:26.951607 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:53:26.952702 systemd[1]: Starting update-engine.service... 
Apr 12 18:53:26.966832 jq[1180]: true Apr 12 18:53:26.955121 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:53:26.958606 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:53:26.958858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:53:26.963356 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:53:26.963636 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:53:26.970985 tar[1184]: ./ Apr 12 18:53:26.970985 tar[1184]: ./loopback Apr 12 18:53:26.972045 jq[1193]: true Apr 12 18:53:26.977858 tar[1185]: crictl Apr 12 18:53:26.978363 systemd[1]: Started dbus.service. Apr 12 18:53:26.978191 dbus-daemon[1160]: [system] SELinux support is enabled Apr 12 18:53:26.981815 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:53:26.981852 systemd[1]: Reached target system-config.target. Apr 12 18:53:26.983170 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:53:26.989724 tar[1190]: linux-amd64/helm Apr 12 18:53:26.983193 systemd[1]: Reached target user-config.target. 
Apr 12 18:53:26.990137 extend-filesystems[1162]: Found sr0 Apr 12 18:53:26.990137 extend-filesystems[1162]: Found vda Apr 12 18:53:26.990137 extend-filesystems[1162]: Found vda1 Apr 12 18:53:26.990137 extend-filesystems[1162]: Found vda2 Apr 12 18:53:26.990137 extend-filesystems[1162]: Found vda3 Apr 12 18:53:26.990137 extend-filesystems[1162]: Found usr Apr 12 18:53:26.990137 extend-filesystems[1162]: Found vda4 Apr 12 18:53:27.041262 extend-filesystems[1162]: Found vda6 Apr 12 18:53:27.041262 extend-filesystems[1162]: Found vda7 Apr 12 18:53:27.041262 extend-filesystems[1162]: Found vda9 Apr 12 18:53:27.041262 extend-filesystems[1162]: Checking size of /dev/vda9 Apr 12 18:53:27.041262 extend-filesystems[1162]: Resized partition /dev/vda9 Apr 12 18:53:26.994616 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:53:27.053380 extend-filesystems[1220]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:53:26.994917 systemd[1]: Finished motdgen.service. Apr 12 18:53:27.057046 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:53:27.080360 update_engine[1177]: I0412 18:53:27.080179 1177 main.cc:92] Flatcar Update Engine starting Apr 12 18:53:27.085265 systemd[1]: Started update-engine.service. Apr 12 18:53:27.085892 update_engine[1177]: I0412 18:53:27.085325 1177 update_check_scheduler.cc:74] Next update check in 4m10s Apr 12 18:53:27.088144 systemd[1]: Started locksmithd.service. Apr 12 18:53:27.097704 systemd-logind[1176]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:53:27.097740 systemd-logind[1176]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:53:27.099013 systemd-logind[1176]: New seat seat0. Apr 12 18:53:27.103378 systemd[1]: Started systemd-logind.service. 
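The kernel line above reports the online ext4 resize that extend-filesystems requested: /dev/vda9 grows from 553472 to 1864699 blocks. With the filesystem's 4 KiB block size (the log later prints the new size as "1864699 (4k) blocks"), that works out as follows:

```python
# Sanity-check the ext4 resize reported above for /dev/vda9.
BLOCK_SIZE = 4096  # 4 KiB ext4 blocks, as stated in the log
old_blocks, new_blocks = 553472, 1864699
old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # → 2.11 GiB -> 7.11 GiB
```

This is the usual first-boot pattern on image-based distributions: the root partition ships small and is grown in place to fill the virtual disk.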
Apr 12 18:53:27.103984 tar[1184]: ./bandwidth Apr 12 18:53:27.107031 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:53:27.108691 bash[1221]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:53:27.109452 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:53:27.132685 env[1199]: time="2024-04-12T18:53:27.132616954Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:53:27.134218 extend-filesystems[1220]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:53:27.134218 extend-filesystems[1220]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:53:27.134218 extend-filesystems[1220]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:53:27.144119 extend-filesystems[1162]: Resized filesystem in /dev/vda9 Apr 12 18:53:27.135421 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:53:27.135661 systemd[1]: Finished extend-filesystems.service. Apr 12 18:53:27.152989 env[1199]: time="2024-04-12T18:53:27.152952678Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:53:27.153423 env[1199]: time="2024-04-12T18:53:27.153404866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:53:27.159840 env[1199]: time="2024-04-12T18:53:27.159789341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:53:27.160040 env[1199]: time="2024-04-12T18:53:27.160010917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:53:27.160412 env[1199]: time="2024-04-12T18:53:27.160392142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:53:27.160494 env[1199]: time="2024-04-12T18:53:27.160475709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:53:27.160590 env[1199]: time="2024-04-12T18:53:27.160569725Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:53:27.160668 env[1199]: time="2024-04-12T18:53:27.160646779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:53:27.160805 env[1199]: time="2024-04-12T18:53:27.160787063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:53:27.161126 env[1199]: time="2024-04-12T18:53:27.161107494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:53:27.161375 env[1199]: time="2024-04-12T18:53:27.161348736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:53:27.161461 env[1199]: time="2024-04-12T18:53:27.161441440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 12 18:53:27.161587 env[1199]: time="2024-04-12T18:53:27.161568118Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:53:27.161665 env[1199]: time="2024-04-12T18:53:27.161646214Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.175966094Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176035815Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176051645Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176092211Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176106047Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176118500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176130062Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176142786Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176154197Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176171149Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176184714Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176199051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176335717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:53:27.177783 env[1199]: time="2024-04-12T18:53:27.176404266Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176697276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176725258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176737150Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176779590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176791112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176802162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176811750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176835104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176849611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176859901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176869308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176881702Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.176984384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.177027214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178187 env[1199]: time="2024-04-12T18:53:27.177039217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178468 env[1199]: time="2024-04-12T18:53:27.177049186Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:53:27.178468 env[1199]: time="2024-04-12T18:53:27.177062851Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:53:27.178468 env[1199]: time="2024-04-12T18:53:27.177072149Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:53:27.178468 env[1199]: time="2024-04-12T18:53:27.177100332Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:53:27.178468 env[1199]: time="2024-04-12T18:53:27.177139726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:53:27.178567 env[1199]: time="2024-04-12T18:53:27.177326576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:53:27.178567 env[1199]: time="2024-04-12T18:53:27.177372412Z" level=info msg="Connect containerd service" Apr 12 18:53:27.178567 env[1199]: time="2024-04-12T18:53:27.177405394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:53:27.179713 env[1199]: time="2024-04-12T18:53:27.179111324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:53:27.184263 env[1199]: time="2024-04-12T18:53:27.181959977Z" level=info msg="Start subscribing containerd event" Apr 12 18:53:27.184263 env[1199]: time="2024-04-12T18:53:27.182261873Z" level=info msg="Start recovering state" Apr 12 18:53:27.184263 env[1199]: time="2024-04-12T18:53:27.182367321Z" level=info msg="Start event monitor" Apr 12 18:53:27.184263 env[1199]: time="2024-04-12T18:53:27.182393280Z" level=info msg="Start snapshots syncer" Apr 12 18:53:27.184263 env[1199]: time="2024-04-12T18:53:27.182408598Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:53:27.184263 env[1199]: 
time="2024-04-12T18:53:27.182415722Z" level=info msg="Start streaming server" Apr 12 18:53:27.191132 tar[1184]: ./ptp Apr 12 18:53:27.191662 env[1199]: time="2024-04-12T18:53:27.191630256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:53:27.191797 env[1199]: time="2024-04-12T18:53:27.191770138Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:53:27.191940 env[1199]: time="2024-04-12T18:53:27.191924818Z" level=info msg="containerd successfully booted in 0.082545s" Apr 12 18:53:27.192050 systemd[1]: Started containerd.service. Apr 12 18:53:27.250088 tar[1184]: ./vlan Apr 12 18:53:27.297843 tar[1184]: ./host-device Apr 12 18:53:27.360513 tar[1184]: ./tuning Apr 12 18:53:27.429957 tar[1184]: ./vrf Apr 12 18:53:27.487790 tar[1184]: ./sbr Apr 12 18:53:27.530132 systemd-networkd[1073]: eth0: Gained IPv6LL Apr 12 18:53:27.574246 tar[1184]: ./tap Apr 12 18:53:27.616962 tar[1184]: ./dhcp Apr 12 18:53:27.763478 sshd_keygen[1194]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:53:27.771456 tar[1184]: ./static Apr 12 18:53:27.773253 systemd[1]: Finished prepare-critools.service. Apr 12 18:53:27.780496 locksmithd[1228]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:53:27.789611 systemd[1]: Finished sshd-keygen.service. Apr 12 18:53:27.792270 systemd[1]: Starting issuegen.service... Apr 12 18:53:27.800250 tar[1184]: ./firewall Apr 12 18:53:27.796415 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:53:27.796572 systemd[1]: Finished issuegen.service. Apr 12 18:53:27.798516 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:53:27.802760 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:53:27.804747 systemd[1]: Started getty@tty1.service. Apr 12 18:53:27.806660 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:53:27.807785 systemd[1]: Reached target getty.target. 
Apr 12 18:53:27.823252 tar[1190]: linux-amd64/LICENSE Apr 12 18:53:27.823303 tar[1190]: linux-amd64/README.md Apr 12 18:53:27.827161 systemd[1]: Finished prepare-helm.service. Apr 12 18:53:27.836257 tar[1184]: ./macvlan Apr 12 18:53:27.866704 tar[1184]: ./dummy Apr 12 18:53:27.896177 tar[1184]: ./bridge Apr 12 18:53:27.928697 tar[1184]: ./ipvlan Apr 12 18:53:27.958365 tar[1184]: ./portmap Apr 12 18:53:27.986798 tar[1184]: ./host-local Apr 12 18:53:28.021851 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 18:53:28.023081 systemd[1]: Reached target multi-user.target. Apr 12 18:53:28.025116 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:53:28.030717 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:53:28.030910 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:53:28.033167 systemd[1]: Startup finished in 8.084s (kernel) + 7.349s (userspace) = 15.434s. Apr 12 18:53:29.694250 systemd[1]: Created slice system-sshd.slice. Apr 12 18:53:29.695443 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:56608.service. Apr 12 18:53:29.730746 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 56608 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:53:29.732064 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:29.739915 systemd-logind[1176]: New session 1 of user core. Apr 12 18:53:29.740770 systemd[1]: Created slice user-500.slice. Apr 12 18:53:29.741683 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:53:29.750144 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:53:29.751504 systemd[1]: Starting user@500.service... Apr 12 18:53:29.754628 (systemd)[1277]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:29.841039 systemd[1277]: Queued start job for default target default.target. Apr 12 18:53:29.841330 systemd[1277]: Reached target paths.target. 
Apr 12 18:53:29.841357 systemd[1277]: Reached target sockets.target. Apr 12 18:53:29.841373 systemd[1277]: Reached target timers.target. Apr 12 18:53:29.841387 systemd[1277]: Reached target basic.target. Apr 12 18:53:29.841432 systemd[1277]: Reached target default.target. Apr 12 18:53:29.841459 systemd[1277]: Startup finished in 81ms. Apr 12 18:53:29.841564 systemd[1]: Started user@500.service. Apr 12 18:53:29.842563 systemd[1]: Started session-1.scope. Apr 12 18:53:29.892528 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:56614.service. Apr 12 18:53:29.926632 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 56614 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:53:29.927874 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:29.931491 systemd-logind[1176]: New session 2 of user core. Apr 12 18:53:29.932245 systemd[1]: Started session-2.scope. Apr 12 18:53:29.986288 sshd[1287]: pam_unix(sshd:session): session closed for user core Apr 12 18:53:29.988891 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:56618.service. Apr 12 18:53:29.989343 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:56614.service: Deactivated successfully. Apr 12 18:53:29.990551 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:53:29.990566 systemd-logind[1176]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:53:29.991446 systemd-logind[1176]: Removed session 2. Apr 12 18:53:30.021749 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 56618 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:53:30.022904 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:30.026776 systemd-logind[1176]: New session 3 of user core. Apr 12 18:53:30.027601 systemd[1]: Started session-3.scope. 
Apr 12 18:53:30.077754 sshd[1293]: pam_unix(sshd:session): session closed for user core Apr 12 18:53:30.080057 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:56632.service. Apr 12 18:53:30.080657 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:56618.service: Deactivated successfully. Apr 12 18:53:30.081705 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:53:30.081738 systemd-logind[1176]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:53:30.082757 systemd-logind[1176]: Removed session 3. Apr 12 18:53:30.114281 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:53:30.115412 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:30.118787 systemd-logind[1176]: New session 4 of user core. Apr 12 18:53:30.119401 systemd[1]: Started session-4.scope. Apr 12 18:53:30.174691 sshd[1299]: pam_unix(sshd:session): session closed for user core Apr 12 18:53:30.176632 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:56646.service. Apr 12 18:53:30.177144 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:56632.service: Deactivated successfully. Apr 12 18:53:30.178135 systemd-logind[1176]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:53:30.178136 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:53:30.179344 systemd-logind[1176]: Removed session 4. Apr 12 18:53:30.206872 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:53:30.207808 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:53:30.211132 systemd-logind[1176]: New session 5 of user core. Apr 12 18:53:30.211718 systemd[1]: Started session-5.scope. 
Apr 12 18:53:30.267094 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:53:30.267267 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:53:30.800397 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:53:30.917929 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:53:30.918347 systemd[1]: Reached target network-online.target. Apr 12 18:53:30.920122 systemd[1]: Starting docker.service... Apr 12 18:53:30.961140 env[1331]: time="2024-04-12T18:53:30.961073844Z" level=info msg="Starting up" Apr 12 18:53:30.962671 env[1331]: time="2024-04-12T18:53:30.962626777Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:53:30.962671 env[1331]: time="2024-04-12T18:53:30.962658376Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:53:30.962767 env[1331]: time="2024-04-12T18:53:30.962679827Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:53:30.962767 env[1331]: time="2024-04-12T18:53:30.962689735Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:53:30.964239 env[1331]: time="2024-04-12T18:53:30.964209436Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:53:30.964239 env[1331]: time="2024-04-12T18:53:30.964233180Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:53:30.964310 env[1331]: time="2024-04-12T18:53:30.964252847Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:53:30.964310 env[1331]: time="2024-04-12T18:53:30.964263397Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:53:32.284946 env[1331]: time="2024-04-12T18:53:32.284895115Z" level=warning msg="Your 
kernel does not support cgroup blkio weight" Apr 12 18:53:32.284946 env[1331]: time="2024-04-12T18:53:32.284933326Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Apr 12 18:53:32.285458 env[1331]: time="2024-04-12T18:53:32.285097174Z" level=info msg="Loading containers: start." Apr 12 18:53:32.393037 kernel: Initializing XFRM netlink socket Apr 12 18:53:32.419176 env[1331]: time="2024-04-12T18:53:32.419134590Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:53:32.463736 systemd-networkd[1073]: docker0: Link UP Apr 12 18:53:32.473703 env[1331]: time="2024-04-12T18:53:32.473663981Z" level=info msg="Loading containers: done." Apr 12 18:53:32.506065 env[1331]: time="2024-04-12T18:53:32.505992540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:53:32.506267 env[1331]: time="2024-04-12T18:53:32.506241417Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:53:32.506357 env[1331]: time="2024-04-12T18:53:32.506340092Z" level=info msg="Daemon has completed initialization" Apr 12 18:53:32.522573 systemd[1]: Started docker.service. Apr 12 18:53:32.529666 env[1331]: time="2024-04-12T18:53:32.529612054Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:53:32.546131 systemd[1]: Reloading. 
Apr 12 18:53:32.604104 /usr/lib/systemd/system-generators/torcx-generator[1472]: time="2024-04-12T18:53:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:53:32.604132 /usr/lib/systemd/system-generators/torcx-generator[1472]: time="2024-04-12T18:53:32Z" level=info msg="torcx already run" Apr 12 18:53:32.670289 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:53:32.670308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:53:32.689074 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:53:32.753775 systemd[1]: Started kubelet.service. Apr 12 18:53:32.877885 kubelet[1520]: E0412 18:53:32.877750 1520 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:53:32.879461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:53:32.879630 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:53:33.223198 env[1199]: time="2024-04-12T18:53:33.223067869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\"" Apr 12 18:53:34.175862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775344984.mount: Deactivated successfully. Apr 12 18:53:36.257869 env[1199]: time="2024-04-12T18:53:36.257759553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:36.351654 env[1199]: time="2024-04-12T18:53:36.351558793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:36.392454 env[1199]: time="2024-04-12T18:53:36.392407604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:36.442627 env[1199]: time="2024-04-12T18:53:36.442558092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:36.443317 env[1199]: time="2024-04-12T18:53:36.443275357Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809\"" Apr 12 18:53:36.453960 env[1199]: time="2024-04-12T18:53:36.453923941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\"" Apr 12 18:53:38.546045 env[1199]: time="2024-04-12T18:53:38.545959903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:53:38.548199 env[1199]: time="2024-04-12T18:53:38.548163255Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:38.549920 env[1199]: time="2024-04-12T18:53:38.549887399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:38.552449 env[1199]: time="2024-04-12T18:53:38.552402938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:38.553513 env[1199]: time="2024-04-12T18:53:38.553453468Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7\"" Apr 12 18:53:38.564185 env[1199]: time="2024-04-12T18:53:38.564143249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\"" Apr 12 18:53:40.105723 env[1199]: time="2024-04-12T18:53:40.105645517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:40.107693 env[1199]: time="2024-04-12T18:53:40.107651369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:40.109752 env[1199]: time="2024-04-12T18:53:40.109726562Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:40.111544 env[1199]: time="2024-04-12T18:53:40.111500930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:40.112073 env[1199]: time="2024-04-12T18:53:40.112042736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833\"" Apr 12 18:53:40.121291 env[1199]: time="2024-04-12T18:53:40.121251890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\"" Apr 12 18:53:41.350566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888267508.mount: Deactivated successfully. Apr 12 18:53:42.482564 env[1199]: time="2024-04-12T18:53:42.482459090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:42.487190 env[1199]: time="2024-04-12T18:53:42.486977726Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:42.493529 env[1199]: time="2024-04-12T18:53:42.489561642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:42.499056 env[1199]: time="2024-04-12T18:53:42.496420718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:53:42.499056 env[1199]: time="2024-04-12T18:53:42.496748753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d\"" Apr 12 18:53:42.551951 env[1199]: time="2024-04-12T18:53:42.551769627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:53:43.088380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:53:43.088578 systemd[1]: Stopped kubelet.service. Apr 12 18:53:43.090222 systemd[1]: Started kubelet.service. Apr 12 18:53:43.140457 kubelet[1571]: E0412 18:53:43.140387 1571 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:53:43.143477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:53:43.143703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:53:43.594913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587472036.mount: Deactivated successfully. 
Apr 12 18:53:43.620760 env[1199]: time="2024-04-12T18:53:43.620656743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:43.626262 env[1199]: time="2024-04-12T18:53:43.626175194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:43.629725 env[1199]: time="2024-04-12T18:53:43.629639012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:43.632801 env[1199]: time="2024-04-12T18:53:43.632725290Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:43.633154 env[1199]: time="2024-04-12T18:53:43.633107558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:53:43.656491 env[1199]: time="2024-04-12T18:53:43.656417440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Apr 12 18:53:44.767710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232916738.mount: Deactivated successfully. 
Apr 12 18:53:49.717486 env[1199]: time="2024-04-12T18:53:49.717376206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:49.719922 env[1199]: time="2024-04-12T18:53:49.719836792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:49.722134 env[1199]: time="2024-04-12T18:53:49.722052578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:49.724129 env[1199]: time="2024-04-12T18:53:49.724058731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:49.725307 env[1199]: time="2024-04-12T18:53:49.725246439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Apr 12 18:53:49.737816 env[1199]: time="2024-04-12T18:53:49.737732729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 18:53:51.192073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540508578.mount: Deactivated successfully. 
Apr 12 18:53:51.821035 env[1199]: time="2024-04-12T18:53:51.820926485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:51.823390 env[1199]: time="2024-04-12T18:53:51.823336726Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:51.825518 env[1199]: time="2024-04-12T18:53:51.825429983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:51.827439 env[1199]: time="2024-04-12T18:53:51.827394908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:51.828122 env[1199]: time="2024-04-12T18:53:51.828059325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Apr 12 18:53:53.338606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:53:53.338892 systemd[1]: Stopped kubelet.service. Apr 12 18:53:53.341191 systemd[1]: Started kubelet.service. 
Apr 12 18:53:53.406961 kubelet[1669]: E0412 18:53:53.406879 1669 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:53:53.409450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:53:53.409686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:53:54.310848 systemd[1]: Stopped kubelet.service. Apr 12 18:53:54.330743 systemd[1]: Reloading. Apr 12 18:53:54.395395 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2024-04-12T18:53:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:53:54.395428 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2024-04-12T18:53:54Z" level=info msg="torcx already run" Apr 12 18:53:54.475101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:53:54.475118 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:53:54.494743 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:53:54.570044 systemd[1]: Started kubelet.service. 
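Annotation: the `run.go:74 "command failed"` record above is the expected crash loop on a node where `kubeadm init` (or `kubeadm join`) has not yet written the kubelet config file, so systemd keeps restarting the unit with `status=1/FAILURE`. A minimal sketch of the check involved — `kubelet_config_present` is a hypothetical helper, not kubelet source; only the path is taken verbatim from the log:

```python
import os

# Path copied verbatim from the error record above; the helper name is ours.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path=KUBELET_CONFIG):
    """True once kubeadm has written the kubelet config file at `path`."""
    return os.path.isfile(path)
```

Once kubeadm writes this file, the next systemd restart of kubelet.service succeeds, which is what the log shows at 18:53:54.570044.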
Apr 12 18:53:54.616252 kubelet[1748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:53:54.616252 kubelet[1748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:53:54.616252 kubelet[1748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:53:54.616749 kubelet[1748]: I0412 18:53:54.616287 1748 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:53:54.783755 kubelet[1748]: I0412 18:53:54.783694 1748 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:53:54.783755 kubelet[1748]: I0412 18:53:54.783744 1748 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:53:54.784111 kubelet[1748]: I0412 18:53:54.784085 1748 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:53:54.788943 kubelet[1748]: I0412 18:53:54.788898 1748 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:53:54.789799 kubelet[1748]: E0412 18:53:54.789775 1748 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.794965 kubelet[1748]: I0412 18:53:54.794928 1748 
server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:53:54.795504 kubelet[1748]: I0412 18:53:54.795477 1748 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:53:54.795631 kubelet[1748]: I0412 18:53:54.795596 1748 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:53:54.795754 kubelet[1748]: I0412 18:53:54.795644 1748 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:53:54.795754 kubelet[1748]: I0412 18:53:54.795661 1748 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:53:54.795829 kubelet[1748]: I0412 
18:53:54.795812 1748 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:53:54.799321 kubelet[1748]: I0412 18:53:54.799277 1748 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:53:54.799404 kubelet[1748]: I0412 18:53:54.799326 1748 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:53:54.799404 kubelet[1748]: I0412 18:53:54.799355 1748 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:53:54.799404 kubelet[1748]: I0412 18:53:54.799379 1748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:53:54.800212 kubelet[1748]: I0412 18:53:54.800184 1748 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:53:54.800453 kubelet[1748]: W0412 18:53:54.800409 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.800515 kubelet[1748]: E0412 18:53:54.800490 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.800657 kubelet[1748]: W0412 18:53:54.800564 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.800731 kubelet[1748]: E0412 18:53:54.800708 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.800731 kubelet[1748]: W0412 18:53:54.800667 1748 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:53:54.801449 kubelet[1748]: I0412 18:53:54.801398 1748 server.go:1168] "Started kubelet" Apr 12 18:53:54.801646 kubelet[1748]: I0412 18:53:54.801512 1748 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:53:54.801646 kubelet[1748]: I0412 18:53:54.801591 1748 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:53:54.802941 kubelet[1748]: E0412 18:53:54.802642 1748 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59d297d09f3a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 53, 54, 801365928, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 53, 54, 801365928, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 
'Post "https://10.0.0.108:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.108:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:53:54.803109 kubelet[1748]: E0412 18:53:54.803054 1748 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:53:54.803109 kubelet[1748]: E0412 18:53:54.803083 1748 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:53:54.806638 kubelet[1748]: I0412 18:53:54.806617 1748 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:53:54.806893 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:53:54.807289 kubelet[1748]: I0412 18:53:54.807264 1748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:53:54.807869 kubelet[1748]: I0412 18:53:54.807798 1748 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:53:54.810439 kubelet[1748]: I0412 18:53:54.810407 1748 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:53:54.811068 kubelet[1748]: W0412 18:53:54.810992 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.811189 kubelet[1748]: E0412 18:53:54.811166 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.811459 kubelet[1748]: E0412 18:53:54.811429 1748 controller.go:146] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Apr 12 18:53:54.828228 kubelet[1748]: I0412 18:53:54.828084 1748 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:53:54.830906 kubelet[1748]: I0412 18:53:54.830871 1748 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:53:54.830906 kubelet[1748]: I0412 18:53:54.830914 1748 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:53:54.831167 kubelet[1748]: I0412 18:53:54.830951 1748 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:53:54.831167 kubelet[1748]: E0412 18:53:54.831042 1748 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:53:54.833335 kubelet[1748]: W0412 18:53:54.833257 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.833488 kubelet[1748]: E0412 18:53:54.833357 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:54.859060 kubelet[1748]: I0412 18:53:54.859024 1748 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:53:54.859060 kubelet[1748]: I0412 18:53:54.859050 1748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:53:54.859060 kubelet[1748]: I0412 18:53:54.859073 1748 state_mem.go:36] "Initialized new in-memory 
state store" Apr 12 18:53:54.862307 kubelet[1748]: I0412 18:53:54.862269 1748 policy_none.go:49] "None policy: Start" Apr 12 18:53:54.862968 kubelet[1748]: I0412 18:53:54.862924 1748 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:53:54.863072 kubelet[1748]: I0412 18:53:54.862984 1748 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:53:54.871785 kubelet[1748]: I0412 18:53:54.871738 1748 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:53:54.872116 kubelet[1748]: I0412 18:53:54.872092 1748 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:53:54.873362 kubelet[1748]: E0412 18:53:54.873319 1748 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:53:54.912185 kubelet[1748]: I0412 18:53:54.912149 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:53:54.912600 kubelet[1748]: E0412 18:53:54.912581 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Apr 12 18:53:54.931915 kubelet[1748]: I0412 18:53:54.931835 1748 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:53:54.933448 kubelet[1748]: I0412 18:53:54.933408 1748 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:53:54.934524 kubelet[1748]: I0412 18:53:54.934488 1748 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:53:55.012254 kubelet[1748]: E0412 18:53:55.012212 1748 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Apr 12 18:53:55.111702 kubelet[1748]: I0412 
18:53:55.111612 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:55.111702 kubelet[1748]: I0412 18:53:55.111703 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:55.112024 kubelet[1748]: I0412 18:53:55.111736 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:55.112024 kubelet[1748]: I0412 18:53:55.111766 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:53:55.112024 kubelet[1748]: I0412 18:53:55.111794 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:55.112024 kubelet[1748]: I0412 18:53:55.111822 1748 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:55.112024 kubelet[1748]: I0412 18:53:55.111914 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:55.112265 kubelet[1748]: I0412 18:53:55.111965 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:55.112265 kubelet[1748]: I0412 18:53:55.112029 1748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:55.114240 kubelet[1748]: I0412 18:53:55.114189 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:53:55.114676 kubelet[1748]: E0412 18:53:55.114648 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Apr 12 18:53:55.238504 kubelet[1748]: E0412 
18:53:55.238435 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:55.239280 env[1199]: time="2024-04-12T18:53:55.239239814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7675e6cc9e1d7e031374e1504cebab70,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:55.241400 kubelet[1748]: E0412 18:53:55.241371 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:55.241611 kubelet[1748]: E0412 18:53:55.241448 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:55.241848 env[1199]: time="2024-04-12T18:53:55.241805917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:55.242073 env[1199]: time="2024-04-12T18:53:55.242048082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:55.413932 kubelet[1748]: E0412 18:53:55.413789 1748 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Apr 12 18:53:55.516674 kubelet[1748]: I0412 18:53:55.516628 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:53:55.517062 kubelet[1748]: E0412 18:53:55.517043 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Apr 12 18:53:55.876882 kubelet[1748]: W0412 18:53:55.876814 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:55.876882 kubelet[1748]: E0412 18:53:55.876882 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:55.897471 kubelet[1748]: W0412 18:53:55.897377 1748 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:55.897471 kubelet[1748]: E0412 18:53:55.897446 1748 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Apr 12 18:53:55.965056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255907754.mount: Deactivated successfully. 
Apr 12 18:53:55.972008 env[1199]: time="2024-04-12T18:53:55.971885688Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.975255 env[1199]: time="2024-04-12T18:53:55.975185197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.976506 env[1199]: time="2024-04-12T18:53:55.976455049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.978575 env[1199]: time="2024-04-12T18:53:55.978503101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.981092 env[1199]: time="2024-04-12T18:53:55.981044778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.982735 env[1199]: time="2024-04-12T18:53:55.982673223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.984575 env[1199]: time="2024-04-12T18:53:55.984503426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.984681 kubelet[1748]: E0412 18:53:55.984430 1748 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59d297d09f3a8", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 53, 54, 801365928, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 53, 54, 801365928, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.108:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.108:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:53:55.986641 env[1199]: time="2024-04-12T18:53:55.986531049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.988924 env[1199]: time="2024-04-12T18:53:55.988682915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.991474 env[1199]: time="2024-04-12T18:53:55.991397196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.992231 env[1199]: 
time="2024-04-12T18:53:55.992188260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:55.994100 env[1199]: time="2024-04-12T18:53:55.994044532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:56.031448 env[1199]: time="2024-04-12T18:53:56.030497217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:56.031448 env[1199]: time="2024-04-12T18:53:56.030537032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:56.031448 env[1199]: time="2024-04-12T18:53:56.030550006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:56.031448 env[1199]: time="2024-04-12T18:53:56.030684248Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d458099fef4714100293f6b036133ac6423658b31f2cecb24327effff7080c1 pid=1811 runtime=io.containerd.runc.v2 Apr 12 18:53:56.031725 env[1199]: time="2024-04-12T18:53:56.024312236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:56.031725 env[1199]: time="2024-04-12T18:53:56.024375404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:56.031725 env[1199]: time="2024-04-12T18:53:56.024389731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:56.031725 env[1199]: time="2024-04-12T18:53:56.024738255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd0ff5ad7c5a7361818fb4d90ec5cedacbeaaadec9a46cb47cf1d079e63352a7 pid=1790 runtime=io.containerd.runc.v2 Apr 12 18:53:56.033279 env[1199]: time="2024-04-12T18:53:56.033210266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:56.033352 env[1199]: time="2024-04-12T18:53:56.033290316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:56.033352 env[1199]: time="2024-04-12T18:53:56.033326113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:56.033536 env[1199]: time="2024-04-12T18:53:56.033493367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9062f75ab293d6a4daefdf794a37bf1061bdfa54a26a708a07fad8e6a2b0f9a pid=1816 runtime=io.containerd.runc.v2 Apr 12 18:53:56.092259 env[1199]: time="2024-04-12T18:53:56.091445629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9062f75ab293d6a4daefdf794a37bf1061bdfa54a26a708a07fad8e6a2b0f9a\"" Apr 12 18:53:56.093014 kubelet[1748]: E0412 18:53:56.092967 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.095842 env[1199]: time="2024-04-12T18:53:56.095793364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0ff5ad7c5a7361818fb4d90ec5cedacbeaaadec9a46cb47cf1d079e63352a7\"" Apr 12 18:53:56.096466 kubelet[1748]: E0412 18:53:56.096443 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.099099 env[1199]: time="2024-04-12T18:53:56.098981213Z" level=info msg="CreateContainer within sandbox \"a9062f75ab293d6a4daefdf794a37bf1061bdfa54a26a708a07fad8e6a2b0f9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:53:56.100513 env[1199]: time="2024-04-12T18:53:56.100480085Z" level=info msg="CreateContainer within sandbox \"dd0ff5ad7c5a7361818fb4d90ec5cedacbeaaadec9a46cb47cf1d079e63352a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 
18:53:56.104242 env[1199]: time="2024-04-12T18:53:56.104202377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7675e6cc9e1d7e031374e1504cebab70,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d458099fef4714100293f6b036133ac6423658b31f2cecb24327effff7080c1\"" Apr 12 18:53:56.105168 kubelet[1748]: E0412 18:53:56.105145 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.107014 env[1199]: time="2024-04-12T18:53:56.106946544Z" level=info msg="CreateContainer within sandbox \"8d458099fef4714100293f6b036133ac6423658b31f2cecb24327effff7080c1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:53:56.130893 env[1199]: time="2024-04-12T18:53:56.130785058Z" level=info msg="CreateContainer within sandbox \"a9062f75ab293d6a4daefdf794a37bf1061bdfa54a26a708a07fad8e6a2b0f9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cee85be2461cc8819a2fb064018b1f2fa252834beb240a041331b02f78144074\"" Apr 12 18:53:56.131814 env[1199]: time="2024-04-12T18:53:56.131776247Z" level=info msg="StartContainer for \"cee85be2461cc8819a2fb064018b1f2fa252834beb240a041331b02f78144074\"" Apr 12 18:53:56.134388 env[1199]: time="2024-04-12T18:53:56.134339175Z" level=info msg="CreateContainer within sandbox \"8d458099fef4714100293f6b036133ac6423658b31f2cecb24327effff7080c1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96183f3e177c5b0fa2767006b46524e968657530ff3106e66f5bb72986eafe0b\"" Apr 12 18:53:56.134867 env[1199]: time="2024-04-12T18:53:56.134830867Z" level=info msg="StartContainer for \"96183f3e177c5b0fa2767006b46524e968657530ff3106e66f5bb72986eafe0b\"" Apr 12 18:53:56.136033 env[1199]: time="2024-04-12T18:53:56.135988358Z" level=info msg="CreateContainer within sandbox 
\"dd0ff5ad7c5a7361818fb4d90ec5cedacbeaaadec9a46cb47cf1d079e63352a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0315f56dc13ac44f2ce855141f8230caedaa1eda449a9e3d13f8ec74a9517f9c\"" Apr 12 18:53:56.137182 env[1199]: time="2024-04-12T18:53:56.137138235Z" level=info msg="StartContainer for \"0315f56dc13ac44f2ce855141f8230caedaa1eda449a9e3d13f8ec74a9517f9c\"" Apr 12 18:53:56.200466 env[1199]: time="2024-04-12T18:53:56.200412109Z" level=info msg="StartContainer for \"96183f3e177c5b0fa2767006b46524e968657530ff3106e66f5bb72986eafe0b\" returns successfully" Apr 12 18:53:56.215498 kubelet[1748]: E0412 18:53:56.215424 1748 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Apr 12 18:53:56.227316 env[1199]: time="2024-04-12T18:53:56.227224501Z" level=info msg="StartContainer for \"cee85be2461cc8819a2fb064018b1f2fa252834beb240a041331b02f78144074\" returns successfully" Apr 12 18:53:56.240418 env[1199]: time="2024-04-12T18:53:56.240364768Z" level=info msg="StartContainer for \"0315f56dc13ac44f2ce855141f8230caedaa1eda449a9e3d13f8ec74a9517f9c\" returns successfully" Apr 12 18:53:56.318886 kubelet[1748]: I0412 18:53:56.318853 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:53:56.840656 kubelet[1748]: E0412 18:53:56.840625 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.842695 kubelet[1748]: E0412 18:53:56.842672 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.844628 kubelet[1748]: E0412 18:53:56.844604 1748 
dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:57.709871 kubelet[1748]: I0412 18:53:57.709827 1748 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:53:57.718552 kubelet[1748]: E0412 18:53:57.718496 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:57.818686 kubelet[1748]: E0412 18:53:57.818640 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:57.846345 kubelet[1748]: E0412 18:53:57.846303 1748 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:57.919390 kubelet[1748]: E0412 18:53:57.919345 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.019928 kubelet[1748]: E0412 18:53:58.019815 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.120493 kubelet[1748]: E0412 18:53:58.120425 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.221242 kubelet[1748]: E0412 18:53:58.221158 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.322442 kubelet[1748]: E0412 18:53:58.322298 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.422952 kubelet[1748]: E0412 18:53:58.422865 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.523676 kubelet[1748]: E0412 18:53:58.523602 1748 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.624495 kubelet[1748]: E0412 18:53:58.624446 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:58.802447 kubelet[1748]: I0412 18:53:58.802400 1748 apiserver.go:52] "Watching apiserver" Apr 12 18:53:58.811735 kubelet[1748]: I0412 18:53:58.811678 1748 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:53:58.840115 kubelet[1748]: I0412 18:53:58.840073 1748 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:54:00.151950 systemd[1]: Reloading. Apr 12 18:54:00.229347 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2024-04-12T18:54:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:54:00.229371 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2024-04-12T18:54:00Z" level=info msg="torcx already run" Apr 12 18:54:00.325043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:54:00.325065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:54:00.354331 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 12 18:54:00.431944 kubelet[1748]: I0412 18:54:00.431845 1748 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:54:00.431941 systemd[1]: Stopping kubelet.service... Apr 12 18:54:00.453297 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:54:00.453600 systemd[1]: Stopped kubelet.service. Apr 12 18:54:00.455404 systemd[1]: Started kubelet.service. Apr 12 18:54:00.502905 kubelet[2089]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:54:00.502905 kubelet[2089]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:54:00.502905 kubelet[2089]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:54:00.503348 kubelet[2089]: I0412 18:54:00.502940 2089 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:54:00.508179 kubelet[2089]: I0412 18:54:00.508149 2089 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:54:00.508179 kubelet[2089]: I0412 18:54:00.508174 2089 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:54:00.508367 kubelet[2089]: I0412 18:54:00.508344 2089 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:54:00.509650 kubelet[2089]: I0412 18:54:00.509629 2089 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 12 18:54:00.510508 kubelet[2089]: I0412 18:54:00.510462 2089 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:54:00.513892 kubelet[2089]: I0412 18:54:00.513869 2089 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:54:00.514295 kubelet[2089]: I0412 18:54:00.514273 2089 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:54:00.514356 kubelet[2089]: I0412 18:54:00.514338 2089 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:54:00.514445 kubelet[2089]: I0412 18:54:00.514359 2089 topology_manager.go:136] "Creating topology manager with policy per scope" 
topologyPolicyName="none" topologyScopeName="container" Apr 12 18:54:00.514445 kubelet[2089]: I0412 18:54:00.514370 2089 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:54:00.514445 kubelet[2089]: I0412 18:54:00.514395 2089 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:54:00.516927 kubelet[2089]: I0412 18:54:00.516910 2089 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:54:00.516927 kubelet[2089]: I0412 18:54:00.516929 2089 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:54:00.517028 kubelet[2089]: I0412 18:54:00.516948 2089 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:54:00.517028 kubelet[2089]: I0412 18:54:00.516962 2089 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:54:00.523394 kubelet[2089]: I0412 18:54:00.517777 2089 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:54:00.523394 kubelet[2089]: I0412 18:54:00.518150 2089 server.go:1168] "Started kubelet" Apr 12 18:54:00.523394 kubelet[2089]: I0412 18:54:00.519591 2089 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:54:00.524386 kubelet[2089]: I0412 18:54:00.524356 2089 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:54:00.525242 kubelet[2089]: I0412 18:54:00.525206 2089 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:54:00.526892 kubelet[2089]: I0412 18:54:00.526363 2089 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:54:00.528492 kubelet[2089]: E0412 18:54:00.527451 2089 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:54:00.528492 kubelet[2089]: E0412 18:54:00.527495 2089 kubelet.go:1400] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:54:00.533546 kubelet[2089]: I0412 18:54:00.533514 2089 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:54:00.538296 kubelet[2089]: I0412 18:54:00.533874 2089 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:54:00.537779 sudo[2108]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:54:00.537956 sudo[2108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:54:00.547363 kubelet[2089]: I0412 18:54:00.547333 2089 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:54:00.548364 kubelet[2089]: I0412 18:54:00.548338 2089 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:54:00.548364 kubelet[2089]: I0412 18:54:00.548367 2089 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:54:00.548443 kubelet[2089]: I0412 18:54:00.548388 2089 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:54:00.548443 kubelet[2089]: E0412 18:54:00.548437 2089 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:54:00.605050 kubelet[2089]: I0412 18:54:00.605012 2089 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:54:00.605050 kubelet[2089]: I0412 18:54:00.605043 2089 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:54:00.605205 kubelet[2089]: I0412 18:54:00.605067 2089 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:54:00.605249 kubelet[2089]: I0412 18:54:00.605214 2089 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:54:00.605249 kubelet[2089]: I0412 18:54:00.605239 2089 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Apr 12 
18:54:00.605249 kubelet[2089]: I0412 18:54:00.605246 2089 policy_none.go:49] "None policy: Start" Apr 12 18:54:00.605798 kubelet[2089]: I0412 18:54:00.605775 2089 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:54:00.605798 kubelet[2089]: I0412 18:54:00.605798 2089 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:54:00.605918 kubelet[2089]: I0412 18:54:00.605898 2089 state_mem.go:75] "Updated machine memory state" Apr 12 18:54:00.607117 kubelet[2089]: I0412 18:54:00.607068 2089 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:54:00.607496 kubelet[2089]: I0412 18:54:00.607475 2089 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:54:00.638040 kubelet[2089]: I0412 18:54:00.638011 2089 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:54:00.644310 kubelet[2089]: I0412 18:54:00.644265 2089 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Apr 12 18:54:00.644510 kubelet[2089]: I0412 18:54:00.644353 2089 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:54:00.649098 kubelet[2089]: I0412 18:54:00.649074 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:00.649287 kubelet[2089]: I0412 18:54:00.649268 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:00.649428 kubelet[2089]: I0412 18:54:00.649407 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:00.741155 kubelet[2089]: I0412 18:54:00.741025 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:54:00.741155 kubelet[2089]: I0412 18:54:00.741073 
2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:54:00.741155 kubelet[2089]: I0412 18:54:00.741095 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:54:00.741348 kubelet[2089]: I0412 18:54:00.741199 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:54:00.741348 kubelet[2089]: I0412 18:54:00.741254 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:54:00.741348 kubelet[2089]: I0412 18:54:00.741295 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:54:00.741348 kubelet[2089]: 
I0412 18:54:00.741314 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:54:00.741348 kubelet[2089]: I0412 18:54:00.741330 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:54:00.741486 kubelet[2089]: I0412 18:54:00.741346 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7675e6cc9e1d7e031374e1504cebab70-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7675e6cc9e1d7e031374e1504cebab70\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:54:00.954958 kubelet[2089]: E0412 18:54:00.954922 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:00.956481 kubelet[2089]: E0412 18:54:00.956453 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:00.957161 kubelet[2089]: E0412 18:54:00.957139 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:01.008700 sudo[2108]: pam_unix(sudo:session): session closed for user root Apr 12 18:54:01.517417 kubelet[2089]: I0412 18:54:01.517348 
2089 apiserver.go:52] "Watching apiserver" Apr 12 18:54:01.534967 kubelet[2089]: I0412 18:54:01.534900 2089 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:54:01.547162 kubelet[2089]: I0412 18:54:01.547105 2089 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:54:01.560054 kubelet[2089]: E0412 18:54:01.560031 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:01.561418 kubelet[2089]: E0412 18:54:01.561379 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:01.619638 kubelet[2089]: E0412 18:54:01.619604 2089 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:54:01.623325 kubelet[2089]: E0412 18:54:01.623272 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:01.631182 kubelet[2089]: I0412 18:54:01.631118 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.631039866 podCreationTimestamp="2024-04-12 18:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:01.620617019 +0000 UTC m=+1.161984010" watchObservedRunningTime="2024-04-12 18:54:01.631039866 +0000 UTC m=+1.172406846" Apr 12 18:54:01.639775 kubelet[2089]: I0412 18:54:01.639729 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.639670557 
podCreationTimestamp="2024-04-12 18:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:01.631554074 +0000 UTC m=+1.172921054" watchObservedRunningTime="2024-04-12 18:54:01.639670557 +0000 UTC m=+1.181037537" Apr 12 18:54:02.247580 sudo[1312]: pam_unix(sudo:session): session closed for user root Apr 12 18:54:02.249412 sshd[1306]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:02.251914 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:56646.service: Deactivated successfully. Apr 12 18:54:02.253162 systemd-logind[1176]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:54:02.253231 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:54:02.254248 systemd-logind[1176]: Removed session 5. Apr 12 18:54:02.562087 kubelet[2089]: E0412 18:54:02.561911 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:03.563687 kubelet[2089]: E0412 18:54:03.563654 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:07.464693 kubelet[2089]: E0412 18:54:07.464632 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:07.534642 kubelet[2089]: I0412 18:54:07.534574 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.534527932 podCreationTimestamp="2024-04-12 18:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:01.640206918 +0000 UTC m=+1.181573898" 
watchObservedRunningTime="2024-04-12 18:54:07.534527932 +0000 UTC m=+7.075894912" Apr 12 18:54:07.569179 kubelet[2089]: E0412 18:54:07.569138 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:08.557477 kubelet[2089]: E0412 18:54:08.557434 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:08.570904 kubelet[2089]: E0412 18:54:08.570870 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:08.572041 kubelet[2089]: E0412 18:54:08.571425 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:09.053643 kubelet[2089]: E0412 18:54:09.053603 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:09.572025 kubelet[2089]: E0412 18:54:09.571971 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:12.605620 update_engine[1177]: I0412 18:54:12.605544 1177 update_attempter.cc:509] Updating boot flags... Apr 12 18:54:13.135929 kubelet[2089]: I0412 18:54:13.135892 2089 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:54:13.136425 env[1199]: time="2024-04-12T18:54:13.136384884Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 12 18:54:13.136659 kubelet[2089]: I0412 18:54:13.136572 2089 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:54:13.954485 kubelet[2089]: I0412 18:54:13.954435 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:13.962458 kubelet[2089]: I0412 18:54:13.962426 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:14.020456 kubelet[2089]: I0412 18:54:14.020378 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-xtables-lock\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020456 kubelet[2089]: I0412 18:54:14.020446 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-bpf-maps\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020744 kubelet[2089]: I0412 18:54:14.020550 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-hostproc\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020744 kubelet[2089]: I0412 18:54:14.020632 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntkc\" (UniqueName: \"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-kube-api-access-4ntkc\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020744 kubelet[2089]: I0412 18:54:14.020673 2089 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf11647-eb98-4987-97e7-7d2490863013-xtables-lock\") pod \"kube-proxy-hx2gq\" (UID: \"ddf11647-eb98-4987-97e7-7d2490863013\") " pod="kube-system/kube-proxy-hx2gq" Apr 12 18:54:14.020856 kubelet[2089]: I0412 18:54:14.020817 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-etc-cni-netd\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020927 kubelet[2089]: I0412 18:54:14.020909 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-cgroup\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.020975 kubelet[2089]: I0412 18:54:14.020965 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-config-path\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021108 kubelet[2089]: I0412 18:54:14.021066 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4q4n\" (UniqueName: \"kubernetes.io/projected/ddf11647-eb98-4987-97e7-7d2490863013-kube-api-access-l4q4n\") pod \"kube-proxy-hx2gq\" (UID: \"ddf11647-eb98-4987-97e7-7d2490863013\") " pod="kube-system/kube-proxy-hx2gq" Apr 12 18:54:14.021162 kubelet[2089]: I0412 18:54:14.021130 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-lib-modules\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021199 kubelet[2089]: I0412 18:54:14.021175 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-hubble-tls\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021230 kubelet[2089]: I0412 18:54:14.021209 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-run\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021260 kubelet[2089]: I0412 18:54:14.021243 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddf11647-eb98-4987-97e7-7d2490863013-kube-proxy\") pod \"kube-proxy-hx2gq\" (UID: \"ddf11647-eb98-4987-97e7-7d2490863013\") " pod="kube-system/kube-proxy-hx2gq" Apr 12 18:54:14.021294 kubelet[2089]: I0412 18:54:14.021278 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2df9ace5-4433-4936-8c79-e49b42acc0e9-clustermesh-secrets\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021336 kubelet[2089]: I0412 18:54:14.021310 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf11647-eb98-4987-97e7-7d2490863013-lib-modules\") pod \"kube-proxy-hx2gq\" (UID: 
\"ddf11647-eb98-4987-97e7-7d2490863013\") " pod="kube-system/kube-proxy-hx2gq" Apr 12 18:54:14.021385 kubelet[2089]: I0412 18:54:14.021369 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cni-path\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021425 kubelet[2089]: I0412 18:54:14.021415 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-net\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.021473 kubelet[2089]: I0412 18:54:14.021450 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-kernel\") pod \"cilium-665dd\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " pod="kube-system/cilium-665dd" Apr 12 18:54:14.094397 kubelet[2089]: I0412 18:54:14.094314 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:14.122456 kubelet[2089]: I0412 18:54:14.122414 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646f34bf-a3b1-4462-abb0-f9846f7ecc24-cilium-config-path\") pod \"cilium-operator-574c4bb98d-phwn5\" (UID: \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\") " pod="kube-system/cilium-operator-574c4bb98d-phwn5" Apr 12 18:54:14.122813 kubelet[2089]: I0412 18:54:14.122795 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664kj\" (UniqueName: 
\"kubernetes.io/projected/646f34bf-a3b1-4462-abb0-f9846f7ecc24-kube-api-access-664kj\") pod \"cilium-operator-574c4bb98d-phwn5\" (UID: \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\") " pod="kube-system/cilium-operator-574c4bb98d-phwn5" Apr 12 18:54:14.262638 kubelet[2089]: E0412 18:54:14.262479 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.264160 env[1199]: time="2024-04-12T18:54:14.264093751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hx2gq,Uid:ddf11647-eb98-4987-97e7-7d2490863013,Namespace:kube-system,Attempt:0,}" Apr 12 18:54:14.269631 kubelet[2089]: E0412 18:54:14.269581 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.270983 env[1199]: time="2024-04-12T18:54:14.270920384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-665dd,Uid:2df9ace5-4433-4936-8c79-e49b42acc0e9,Namespace:kube-system,Attempt:0,}" Apr 12 18:54:14.291078 env[1199]: time="2024-04-12T18:54:14.290926572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:54:14.291078 env[1199]: time="2024-04-12T18:54:14.291013888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:54:14.291078 env[1199]: time="2024-04-12T18:54:14.291031542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:54:14.291592 env[1199]: time="2024-04-12T18:54:14.291233304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f70364a2900bf40fab4f95812fe6b8c4d12297182c0911b91b04f2d54424f8e pid=2198 runtime=io.containerd.runc.v2 Apr 12 18:54:14.311687 env[1199]: time="2024-04-12T18:54:14.311123603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:54:14.311687 env[1199]: time="2024-04-12T18:54:14.311203184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:54:14.311687 env[1199]: time="2024-04-12T18:54:14.311221449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:54:14.311687 env[1199]: time="2024-04-12T18:54:14.311462706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c pid=2220 runtime=io.containerd.runc.v2 Apr 12 18:54:14.340729 env[1199]: time="2024-04-12T18:54:14.340652275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hx2gq,Uid:ddf11647-eb98-4987-97e7-7d2490863013,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f70364a2900bf40fab4f95812fe6b8c4d12297182c0911b91b04f2d54424f8e\"" Apr 12 18:54:14.342176 kubelet[2089]: E0412 18:54:14.342151 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.344658 env[1199]: time="2024-04-12T18:54:14.344624145Z" level=info msg="CreateContainer within sandbox \"9f70364a2900bf40fab4f95812fe6b8c4d12297182c0911b91b04f2d54424f8e\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:54:14.358537 env[1199]: time="2024-04-12T18:54:14.358406544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-665dd,Uid:2df9ace5-4433-4936-8c79-e49b42acc0e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\"" Apr 12 18:54:14.359127 kubelet[2089]: E0412 18:54:14.359101 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.360709 env[1199]: time="2024-04-12T18:54:14.360668143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:54:14.383476 env[1199]: time="2024-04-12T18:54:14.383379029Z" level=info msg="CreateContainer within sandbox \"9f70364a2900bf40fab4f95812fe6b8c4d12297182c0911b91b04f2d54424f8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1e0285b32c7c6a3cedd0fab531e24b34dc49d0648d9be2a576dfba47d6cfd04\"" Apr 12 18:54:14.385153 env[1199]: time="2024-04-12T18:54:14.384683723Z" level=info msg="StartContainer for \"a1e0285b32c7c6a3cedd0fab531e24b34dc49d0648d9be2a576dfba47d6cfd04\"" Apr 12 18:54:14.397746 kubelet[2089]: E0412 18:54:14.397704 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.399074 env[1199]: time="2024-04-12T18:54:14.398359941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-phwn5,Uid:646f34bf-a3b1-4462-abb0-f9846f7ecc24,Namespace:kube-system,Attempt:0,}" Apr 12 18:54:14.427589 env[1199]: time="2024-04-12T18:54:14.427487762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:54:14.427785 env[1199]: time="2024-04-12T18:54:14.427605677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:54:14.427785 env[1199]: time="2024-04-12T18:54:14.427647055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:54:14.428130 env[1199]: time="2024-04-12T18:54:14.428081999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1 pid=2303 runtime=io.containerd.runc.v2 Apr 12 18:54:14.456159 env[1199]: time="2024-04-12T18:54:14.456058589Z" level=info msg="StartContainer for \"a1e0285b32c7c6a3cedd0fab531e24b34dc49d0648d9be2a576dfba47d6cfd04\" returns successfully" Apr 12 18:54:14.492394 env[1199]: time="2024-04-12T18:54:14.492133943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-phwn5,Uid:646f34bf-a3b1-4462-abb0-f9846f7ecc24,Namespace:kube-system,Attempt:0,} returns sandbox id \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\"" Apr 12 18:54:14.493068 kubelet[2089]: E0412 18:54:14.493019 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:14.586266 kubelet[2089]: E0412 18:54:14.585216 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:20.214037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122865053.mount: Deactivated successfully. 
Apr 12 18:54:26.030403 env[1199]: time="2024-04-12T18:54:26.030338989Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:26.032365 env[1199]: time="2024-04-12T18:54:26.032320414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:26.034179 env[1199]: time="2024-04-12T18:54:26.034136587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:26.034852 env[1199]: time="2024-04-12T18:54:26.034811980Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:54:26.036986 env[1199]: time="2024-04-12T18:54:26.036934732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:54:26.038827 env[1199]: time="2024-04-12T18:54:26.038780320Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:54:26.054554 env[1199]: time="2024-04-12T18:54:26.054497407Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\"" Apr 12 
18:54:26.055147 env[1199]: time="2024-04-12T18:54:26.055108359Z" level=info msg="StartContainer for \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\"" Apr 12 18:54:26.099234 env[1199]: time="2024-04-12T18:54:26.099176337Z" level=info msg="StartContainer for \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\" returns successfully" Apr 12 18:54:26.611308 kubelet[2089]: E0412 18:54:26.610656 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:26.640020 kubelet[2089]: I0412 18:54:26.639970 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hx2gq" podStartSLOduration=13.639924788 podCreationTimestamp="2024-04-12 18:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:14.694604228 +0000 UTC m=+14.235971228" watchObservedRunningTime="2024-04-12 18:54:26.639924788 +0000 UTC m=+26.181291768" Apr 12 18:54:26.656232 env[1199]: time="2024-04-12T18:54:26.656145154Z" level=info msg="shim disconnected" id=d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805 Apr 12 18:54:26.656232 env[1199]: time="2024-04-12T18:54:26.656204645Z" level=warning msg="cleaning up after shim disconnected" id=d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805 namespace=k8s.io Apr 12 18:54:26.656232 env[1199]: time="2024-04-12T18:54:26.656213442Z" level=info msg="cleaning up dead shim" Apr 12 18:54:26.663509 env[1199]: time="2024-04-12T18:54:26.663434163Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:54:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2516 runtime=io.containerd.runc.v2\n" Apr 12 18:54:27.049146 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805-rootfs.mount: Deactivated successfully. Apr 12 18:54:27.613354 kubelet[2089]: E0412 18:54:27.613324 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:27.614911 env[1199]: time="2024-04-12T18:54:27.614857390Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:54:27.936764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629414541.mount: Deactivated successfully. Apr 12 18:54:27.938087 env[1199]: time="2024-04-12T18:54:27.938040171Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\"" Apr 12 18:54:27.938693 env[1199]: time="2024-04-12T18:54:27.938656814Z" level=info msg="StartContainer for \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\"" Apr 12 18:54:27.990527 env[1199]: time="2024-04-12T18:54:27.990476423Z" level=info msg="StartContainer for \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\" returns successfully" Apr 12 18:54:27.996904 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:54:27.997746 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:54:27.998069 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:54:28.000014 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:54:28.007518 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 18:54:28.026919 env[1199]: time="2024-04-12T18:54:28.026850924Z" level=info msg="shim disconnected" id=8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12 Apr 12 18:54:28.026919 env[1199]: time="2024-04-12T18:54:28.026921436Z" level=warning msg="cleaning up after shim disconnected" id=8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12 namespace=k8s.io Apr 12 18:54:28.027220 env[1199]: time="2024-04-12T18:54:28.026934621Z" level=info msg="cleaning up dead shim" Apr 12 18:54:28.034818 env[1199]: time="2024-04-12T18:54:28.034766656Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2581 runtime=io.containerd.runc.v2\n" Apr 12 18:54:28.048736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12-rootfs.mount: Deactivated successfully. Apr 12 18:54:28.616481 kubelet[2089]: E0412 18:54:28.616450 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:28.618368 env[1199]: time="2024-04-12T18:54:28.618323884Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:54:28.902853 env[1199]: time="2024-04-12T18:54:28.902698712Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\"" Apr 12 18:54:28.903476 env[1199]: time="2024-04-12T18:54:28.903423537Z" level=info msg="StartContainer for \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\"" Apr 12 18:54:28.953330 env[1199]: time="2024-04-12T18:54:28.953272215Z" 
level=info msg="StartContainer for \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\" returns successfully" Apr 12 18:54:29.042877 env[1199]: time="2024-04-12T18:54:29.042818548Z" level=info msg="shim disconnected" id=5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a Apr 12 18:54:29.043652 env[1199]: time="2024-04-12T18:54:29.043610470Z" level=warning msg="cleaning up after shim disconnected" id=5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a namespace=k8s.io Apr 12 18:54:29.043776 env[1199]: time="2024-04-12T18:54:29.043746997Z" level=info msg="cleaning up dead shim" Apr 12 18:54:29.048881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a-rootfs.mount: Deactivated successfully. Apr 12 18:54:29.053831 env[1199]: time="2024-04-12T18:54:29.053782838Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2637 runtime=io.containerd.runc.v2\n" Apr 12 18:54:29.620498 kubelet[2089]: E0412 18:54:29.620452 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:29.623030 env[1199]: time="2024-04-12T18:54:29.622302539Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:54:29.930499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150810649.mount: Deactivated successfully. 
Apr 12 18:54:30.276514 env[1199]: time="2024-04-12T18:54:30.276156022Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\"" Apr 12 18:54:30.276878 env[1199]: time="2024-04-12T18:54:30.276836023Z" level=info msg="StartContainer for \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\"" Apr 12 18:54:30.297851 systemd[1]: run-containerd-runc-k8s.io-369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b-runc.UXykEG.mount: Deactivated successfully. Apr 12 18:54:30.551227 env[1199]: time="2024-04-12T18:54:30.550729373Z" level=info msg="StartContainer for \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\" returns successfully" Apr 12 18:54:30.563098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b-rootfs.mount: Deactivated successfully. 
Apr 12 18:54:30.624260 kubelet[2089]: E0412 18:54:30.624097 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:31.294600 env[1199]: time="2024-04-12T18:54:31.294507011Z" level=error msg="collecting metrics for 369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b" error="cgroups: cgroup deleted: unknown" Apr 12 18:54:31.431026 env[1199]: time="2024-04-12T18:54:31.430930009Z" level=info msg="shim disconnected" id=369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b Apr 12 18:54:31.431026 env[1199]: time="2024-04-12T18:54:31.430977228Z" level=warning msg="cleaning up after shim disconnected" id=369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b namespace=k8s.io Apr 12 18:54:31.431026 env[1199]: time="2024-04-12T18:54:31.430985403Z" level=info msg="cleaning up dead shim" Apr 12 18:54:31.438913 env[1199]: time="2024-04-12T18:54:31.438862281Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:54:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2691 runtime=io.containerd.runc.v2\n" Apr 12 18:54:31.480166 env[1199]: time="2024-04-12T18:54:31.480133822Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:31.502284 env[1199]: time="2024-04-12T18:54:31.502224807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:31.506672 env[1199]: time="2024-04-12T18:54:31.506619800Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:54:31.507046 env[1199]: time="2024-04-12T18:54:31.507013922Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:54:31.508863 env[1199]: time="2024-04-12T18:54:31.508826514Z" level=info msg="CreateContainer within sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:54:31.535487 env[1199]: time="2024-04-12T18:54:31.535421557Z" level=info msg="CreateContainer within sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\"" Apr 12 18:54:31.536191 env[1199]: time="2024-04-12T18:54:31.536144508Z" level=info msg="StartContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\"" Apr 12 18:54:31.577567 env[1199]: time="2024-04-12T18:54:31.577413154Z" level=info msg="StartContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" returns successfully" Apr 12 18:54:31.627741 kubelet[2089]: E0412 18:54:31.627696 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:31.636581 kubelet[2089]: E0412 18:54:31.634886 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:31.644036 env[1199]: 
time="2024-04-12T18:54:31.643971547Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:54:31.651036 kubelet[2089]: I0412 18:54:31.650977 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-phwn5" podStartSLOduration=0.637812481 podCreationTimestamp="2024-04-12 18:54:14 +0000 UTC" firstStartedPulling="2024-04-12 18:54:14.494170384 +0000 UTC m=+14.035537364" lastFinishedPulling="2024-04-12 18:54:31.507297967 +0000 UTC m=+31.048664957" observedRunningTime="2024-04-12 18:54:31.636041077 +0000 UTC m=+31.177408057" watchObservedRunningTime="2024-04-12 18:54:31.650940074 +0000 UTC m=+31.192307054" Apr 12 18:54:31.661518 env[1199]: time="2024-04-12T18:54:31.661467271Z" level=info msg="CreateContainer within sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\"" Apr 12 18:54:31.662555 env[1199]: time="2024-04-12T18:54:31.662501047Z" level=info msg="StartContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\"" Apr 12 18:54:31.738639 env[1199]: time="2024-04-12T18:54:31.738578398Z" level=info msg="StartContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" returns successfully" Apr 12 18:54:31.906796 kubelet[2089]: I0412 18:54:31.906750 2089 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 18:54:31.930459 kubelet[2089]: I0412 18:54:31.930381 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:31.937301 kubelet[2089]: I0412 18:54:31.937265 2089 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:54:31.948382 kubelet[2089]: I0412 18:54:31.948342 2089 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtl8\" (UniqueName: \"kubernetes.io/projected/07297f13-be36-4301-95f4-7fa46cda98d6-kube-api-access-ngtl8\") pod \"coredns-5d78c9869d-pgm5r\" (UID: \"07297f13-be36-4301-95f4-7fa46cda98d6\") " pod="kube-system/coredns-5d78c9869d-pgm5r" Apr 12 18:54:31.948719 kubelet[2089]: I0412 18:54:31.948686 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07297f13-be36-4301-95f4-7fa46cda98d6-config-volume\") pod \"coredns-5d78c9869d-pgm5r\" (UID: \"07297f13-be36-4301-95f4-7fa46cda98d6\") " pod="kube-system/coredns-5d78c9869d-pgm5r" Apr 12 18:54:32.049290 kubelet[2089]: I0412 18:54:32.049244 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c100c8a7-4eed-4679-ac4e-d1ee6189abfd-config-volume\") pod \"coredns-5d78c9869d-c9wgx\" (UID: \"c100c8a7-4eed-4679-ac4e-d1ee6189abfd\") " pod="kube-system/coredns-5d78c9869d-c9wgx" Apr 12 18:54:32.049546 kubelet[2089]: I0412 18:54:32.049532 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ftf\" (UniqueName: \"kubernetes.io/projected/c100c8a7-4eed-4679-ac4e-d1ee6189abfd-kube-api-access-f8ftf\") pod \"coredns-5d78c9869d-c9wgx\" (UID: \"c100c8a7-4eed-4679-ac4e-d1ee6189abfd\") " pod="kube-system/coredns-5d78c9869d-c9wgx" Apr 12 18:54:32.236830 kubelet[2089]: E0412 18:54:32.236619 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:32.237703 env[1199]: time="2024-04-12T18:54:32.237640963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-pgm5r,Uid:07297f13-be36-4301-95f4-7fa46cda98d6,Namespace:kube-system,Attempt:0,}" 
Apr 12 18:54:32.240638 kubelet[2089]: E0412 18:54:32.240610 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:32.241087 env[1199]: time="2024-04-12T18:54:32.240990537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-c9wgx,Uid:c100c8a7-4eed-4679-ac4e-d1ee6189abfd,Namespace:kube-system,Attempt:0,}" Apr 12 18:54:32.644565 kubelet[2089]: E0412 18:54:32.644533 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:32.645205 kubelet[2089]: E0412 18:54:32.645192 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:32.678232 kubelet[2089]: I0412 18:54:32.678196 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-665dd" podStartSLOduration=8.002781887 podCreationTimestamp="2024-04-12 18:54:13 +0000 UTC" firstStartedPulling="2024-04-12 18:54:14.360062294 +0000 UTC m=+13.901429274" lastFinishedPulling="2024-04-12 18:54:26.0354325 +0000 UTC m=+25.576799480" observedRunningTime="2024-04-12 18:54:32.677612989 +0000 UTC m=+32.218979959" watchObservedRunningTime="2024-04-12 18:54:32.678152093 +0000 UTC m=+32.219519073" Apr 12 18:54:33.646832 kubelet[2089]: E0412 18:54:33.646792 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:33.712458 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:44342.service. 
Apr 12 18:54:33.747105 sshd[2896]: Accepted publickey for core from 10.0.0.1 port 44342 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:54:33.748494 sshd[2896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:33.754725 systemd-logind[1176]: New session 6 of user core. Apr 12 18:54:33.754972 systemd[1]: Started session-6.scope. Apr 12 18:54:33.911358 sshd[2896]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:33.914470 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:44342.service: Deactivated successfully. Apr 12 18:54:33.915931 systemd-logind[1176]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:54:33.916107 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:54:33.917092 systemd-logind[1176]: Removed session 6. Apr 12 18:54:33.973973 systemd-networkd[1073]: cilium_host: Link UP Apr 12 18:54:33.974137 systemd-networkd[1073]: cilium_net: Link UP Apr 12 18:54:33.974139 systemd-networkd[1073]: cilium_net: Gained carrier Apr 12 18:54:33.974268 systemd-networkd[1073]: cilium_host: Gained carrier Apr 12 18:54:33.978878 systemd-networkd[1073]: cilium_host: Gained IPv6LL Apr 12 18:54:33.979664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:54:34.077720 systemd-networkd[1073]: cilium_vxlan: Link UP Apr 12 18:54:34.077728 systemd-networkd[1073]: cilium_vxlan: Gained carrier Apr 12 18:54:34.084100 systemd-networkd[1073]: cilium_net: Gained IPv6LL Apr 12 18:54:34.341046 kernel: NET: Registered PF_ALG protocol family Apr 12 18:54:34.652931 kubelet[2089]: E0412 18:54:34.652430 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:35.645516 systemd-networkd[1073]: lxc_health: Link UP Apr 12 18:54:35.659185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:54:35.654720 
systemd-networkd[1073]: lxc_health: Gained carrier Apr 12 18:54:35.870899 systemd-networkd[1073]: lxc91369fe08376: Link UP Apr 12 18:54:35.883047 kernel: eth0: renamed from tmp1fa36 Apr 12 18:54:35.920053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc91369fe08376: link becomes ready Apr 12 18:54:35.928108 kernel: eth0: renamed from tmpddac6 Apr 12 18:54:35.927168 systemd-networkd[1073]: lxc91369fe08376: Gained carrier Apr 12 18:54:35.929636 systemd-networkd[1073]: lxc4852ae61219b: Link UP Apr 12 18:54:35.960037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4852ae61219b: link becomes ready Apr 12 18:54:35.962896 systemd-networkd[1073]: lxc4852ae61219b: Gained carrier Apr 12 18:54:36.135394 systemd-networkd[1073]: cilium_vxlan: Gained IPv6LL Apr 12 18:54:36.274252 kubelet[2089]: E0412 18:54:36.273677 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:36.685942 kubelet[2089]: E0412 18:54:36.685905 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:37.434503 systemd-networkd[1073]: lxc91369fe08376: Gained IPv6LL Apr 12 18:54:37.668563 systemd-networkd[1073]: lxc_health: Gained IPv6LL Apr 12 18:54:37.688881 kubelet[2089]: E0412 18:54:37.688729 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:37.862452 systemd-networkd[1073]: lxc4852ae61219b: Gained IPv6LL Apr 12 18:54:38.920826 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:44352.service. 
Apr 12 18:54:38.982728 sshd[3282]: Accepted publickey for core from 10.0.0.1 port 44352 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:54:38.990106 sshd[3282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:54:39.002985 systemd-logind[1176]: New session 7 of user core.
Apr 12 18:54:39.003584 systemd[1]: Started session-7.scope.
Apr 12 18:54:39.295222 sshd[3282]: pam_unix(sshd:session): session closed for user core
Apr 12 18:54:39.304405 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:44352.service: Deactivated successfully.
Apr 12 18:54:39.306540 systemd[1]: session-7.scope: Deactivated successfully.
Apr 12 18:54:39.307353 systemd-logind[1176]: Session 7 logged out. Waiting for processes to exit.
Apr 12 18:54:39.308680 systemd-logind[1176]: Removed session 7.
Apr 12 18:54:41.408309 env[1199]: time="2024-04-12T18:54:41.407895504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:54:41.408309 env[1199]: time="2024-04-12T18:54:41.407966348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:54:41.408309 env[1199]: time="2024-04-12T18:54:41.407981697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:54:41.408309 env[1199]: time="2024-04-12T18:54:41.408206059Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fa363329bf9e7d504e72a99b1caf7a033ad2e8a56791c1a8f929238356005e2 pid=3315 runtime=io.containerd.runc.v2
Apr 12 18:54:41.420132 env[1199]: time="2024-04-12T18:54:41.420050326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:54:41.420331 env[1199]: time="2024-04-12T18:54:41.420145195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:54:41.420331 env[1199]: time="2024-04-12T18:54:41.420184128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:54:41.422537 env[1199]: time="2024-04-12T18:54:41.421033394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddac6f1fac94bfc6c55cc134fdaa25d381d79318ca776123af0cb35d2fb137af pid=3336 runtime=io.containerd.runc.v2
Apr 12 18:54:41.473921 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:54:41.475696 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:54:41.519570 env[1199]: time="2024-04-12T18:54:41.517886274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-pgm5r,Uid:07297f13-be36-4301-95f4-7fa46cda98d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddac6f1fac94bfc6c55cc134fdaa25d381d79318ca776123af0cb35d2fb137af\""
Apr 12 18:54:41.519783 kubelet[2089]: E0412 18:54:41.518681 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:41.523340 env[1199]: time="2024-04-12T18:54:41.522925644Z" level=info msg="CreateContainer within sandbox \"ddac6f1fac94bfc6c55cc134fdaa25d381d79318ca776123af0cb35d2fb137af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:54:41.526070 env[1199]: time="2024-04-12T18:54:41.525958644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-c9wgx,Uid:c100c8a7-4eed-4679-ac4e-d1ee6189abfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa363329bf9e7d504e72a99b1caf7a033ad2e8a56791c1a8f929238356005e2\""
Apr 12 18:54:41.527859 kubelet[2089]: E0412 18:54:41.527794 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:41.530222 env[1199]: time="2024-04-12T18:54:41.530158888Z" level=info msg="CreateContainer within sandbox \"1fa363329bf9e7d504e72a99b1caf7a033ad2e8a56791c1a8f929238356005e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:54:41.565499 env[1199]: time="2024-04-12T18:54:41.565366821Z" level=info msg="CreateContainer within sandbox \"ddac6f1fac94bfc6c55cc134fdaa25d381d79318ca776123af0cb35d2fb137af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c56fdf09328f3bda974eebc91ea1453c4b458f7b55cf548e79c09642b44ff81\""
Apr 12 18:54:41.569526 env[1199]: time="2024-04-12T18:54:41.567897317Z" level=info msg="StartContainer for \"2c56fdf09328f3bda974eebc91ea1453c4b458f7b55cf548e79c09642b44ff81\""
Apr 12 18:54:41.591268 env[1199]: time="2024-04-12T18:54:41.591200238Z" level=info msg="CreateContainer within sandbox \"1fa363329bf9e7d504e72a99b1caf7a033ad2e8a56791c1a8f929238356005e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a42e0e4c09f3d9b289bb5c1a74a68e4522f47ed662cf802db2f95bf4161f72f3\""
Apr 12 18:54:41.593356 env[1199]: time="2024-04-12T18:54:41.591951821Z" level=info msg="StartContainer for \"a42e0e4c09f3d9b289bb5c1a74a68e4522f47ed662cf802db2f95bf4161f72f3\""
Apr 12 18:54:41.646109 env[1199]: time="2024-04-12T18:54:41.645515670Z" level=info msg="StartContainer for \"2c56fdf09328f3bda974eebc91ea1453c4b458f7b55cf548e79c09642b44ff81\" returns successfully"
Apr 12 18:54:41.682681 env[1199]: time="2024-04-12T18:54:41.680811640Z" level=info msg="StartContainer for \"a42e0e4c09f3d9b289bb5c1a74a68e4522f47ed662cf802db2f95bf4161f72f3\" returns successfully"
Apr 12 18:54:41.705026 kubelet[2089]: E0412 18:54:41.704954 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:41.709720 kubelet[2089]: E0412 18:54:41.709197 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:41.727928 kubelet[2089]: I0412 18:54:41.727865 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-pgm5r" podStartSLOduration=27.727815151 podCreationTimestamp="2024-04-12 18:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:41.727534974 +0000 UTC m=+41.268901955" watchObservedRunningTime="2024-04-12 18:54:41.727815151 +0000 UTC m=+41.269182131"
Apr 12 18:54:41.754903 kubelet[2089]: I0412 18:54:41.754375 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-c9wgx" podStartSLOduration=27.754318908 podCreationTimestamp="2024-04-12 18:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:41.74963713 +0000 UTC m=+41.291004130" watchObservedRunningTime="2024-04-12 18:54:41.754318908 +0000 UTC m=+41.295685888"
Apr 12 18:54:42.713130 kubelet[2089]: E0412 18:54:42.711314 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:42.713130 kubelet[2089]: E0412 18:54:42.712160 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:43.713515 kubelet[2089]: E0412 18:54:43.713465 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:43.714070 kubelet[2089]: E0412 18:54:43.713465 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:54:44.303480 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:48738.service.
Apr 12 18:54:44.357015 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 48738 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:54:44.359759 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:54:44.368436 systemd-logind[1176]: New session 8 of user core.
Apr 12 18:54:44.370208 systemd[1]: Started session-8.scope.
Apr 12 18:54:44.536711 sshd[3476]: pam_unix(sshd:session): session closed for user core
Apr 12 18:54:44.541098 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:48738.service: Deactivated successfully.
Apr 12 18:54:44.542826 systemd-logind[1176]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:54:44.542990 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:54:44.544557 systemd-logind[1176]: Removed session 8.
Apr 12 18:54:49.541819 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:60898.service.
Apr 12 18:54:49.610872 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:54:49.613124 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:54:49.624395 systemd-logind[1176]: New session 9 of user core.
Apr 12 18:54:49.628069 systemd[1]: Started session-9.scope.
Apr 12 18:54:49.874837 sshd[3493]: pam_unix(sshd:session): session closed for user core
Apr 12 18:54:49.878862 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:60898.service: Deactivated successfully.
Apr 12 18:54:49.882674 systemd-logind[1176]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:54:49.882693 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:54:49.885181 systemd-logind[1176]: Removed session 9.
Apr 12 18:54:54.881283 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:60912.service.
Apr 12 18:54:54.921752 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 60912 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:54:54.923197 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:54:54.928279 systemd-logind[1176]: New session 10 of user core.
Apr 12 18:54:54.929389 systemd[1]: Started session-10.scope.
Apr 12 18:54:55.069200 sshd[3511]: pam_unix(sshd:session): session closed for user core
Apr 12 18:54:55.072814 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:60912.service: Deactivated successfully.
Apr 12 18:54:55.074433 systemd-logind[1176]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:54:55.074437 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:54:55.076846 systemd-logind[1176]: Removed session 10.
Apr 12 18:55:00.073089 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:49174.service.
Apr 12 18:55:00.105932 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 49174 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:00.107531 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:00.112362 systemd-logind[1176]: New session 11 of user core.
Apr 12 18:55:00.113550 systemd[1]: Started session-11.scope.
Apr 12 18:55:00.237092 sshd[3527]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:00.240054 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:49174.service: Deactivated successfully.
Apr 12 18:55:00.241093 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:55:00.242165 systemd-logind[1176]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:55:00.243073 systemd-logind[1176]: Removed session 11.
Apr 12 18:55:05.245254 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:49186.service.
Apr 12 18:55:05.284356 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 49186 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:05.289035 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:05.306488 systemd-logind[1176]: New session 12 of user core.
Apr 12 18:55:05.307793 systemd[1]: Started session-12.scope.
Apr 12 18:55:05.471484 sshd[3544]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:05.474364 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:49202.service.
Apr 12 18:55:05.477209 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:49186.service: Deactivated successfully.
Apr 12 18:55:05.479047 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:55:05.480659 systemd-logind[1176]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:55:05.483500 systemd-logind[1176]: Removed session 12.
Apr 12 18:55:05.511252 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 49202 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:05.514356 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:05.523807 systemd-logind[1176]: New session 13 of user core.
Apr 12 18:55:05.525045 systemd[1]: Started session-13.scope.
Apr 12 18:55:06.700801 sshd[3557]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:06.705267 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:49212.service.
Apr 12 18:55:06.711236 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:49202.service: Deactivated successfully.
Apr 12 18:55:06.713195 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:55:06.720980 systemd-logind[1176]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:55:06.723057 systemd-logind[1176]: Removed session 13.
Apr 12 18:55:06.771622 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 49212 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:06.774310 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:06.788225 systemd-logind[1176]: New session 14 of user core.
Apr 12 18:55:06.790262 systemd[1]: Started session-14.scope.
Apr 12 18:55:06.980031 sshd[3569]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:06.987280 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:49212.service: Deactivated successfully.
Apr 12 18:55:06.988546 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:55:06.992730 systemd-logind[1176]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:55:06.995040 systemd-logind[1176]: Removed session 14.
Apr 12 18:55:09.550048 kubelet[2089]: E0412 18:55:09.549971 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:11.891694 kernel: hrtimer: interrupt took 13984086 ns
Apr 12 18:55:11.989256 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:36214.service.
Apr 12 18:55:12.047121 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:12.052357 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:12.059317 systemd-logind[1176]: New session 15 of user core.
Apr 12 18:55:12.059858 systemd[1]: Started session-15.scope.
Apr 12 18:55:12.279152 sshd[3585]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:12.283898 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:36214.service: Deactivated successfully.
Apr 12 18:55:12.285055 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:55:12.287284 systemd-logind[1176]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:55:12.291509 systemd-logind[1176]: Removed session 15.
Apr 12 18:55:17.284584 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:36228.service.
Apr 12 18:55:17.324150 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 36228 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:17.326567 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:17.335157 systemd-logind[1176]: New session 16 of user core.
Apr 12 18:55:17.335652 systemd[1]: Started session-16.scope.
Apr 12 18:55:17.468658 sshd[3602]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:17.472196 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:36228.service: Deactivated successfully.
Apr 12 18:55:17.474803 systemd-logind[1176]: Session 16 logged out. Waiting for processes to exit.
Apr 12 18:55:17.475153 systemd[1]: session-16.scope: Deactivated successfully.
Apr 12 18:55:17.477136 systemd-logind[1176]: Removed session 16.
Apr 12 18:55:22.477303 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:39474.service.
Apr 12 18:55:22.508481 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:22.510622 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:22.516020 systemd-logind[1176]: New session 17 of user core.
Apr 12 18:55:22.517380 systemd[1]: Started session-17.scope.
Apr 12 18:55:22.657387 sshd[3617]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:22.661971 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:39476.service.
Apr 12 18:55:22.662852 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:39474.service: Deactivated successfully.
Apr 12 18:55:22.665387 systemd[1]: session-17.scope: Deactivated successfully.
Apr 12 18:55:22.666136 systemd-logind[1176]: Session 17 logged out. Waiting for processes to exit.
Apr 12 18:55:22.667195 systemd-logind[1176]: Removed session 17.
Apr 12 18:55:22.701502 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 39476 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:22.703884 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:22.713483 systemd-logind[1176]: New session 18 of user core.
Apr 12 18:55:22.714522 systemd[1]: Started session-18.scope.
Apr 12 18:55:23.077317 sshd[3630]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:23.080810 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:39480.service.
Apr 12 18:55:23.083935 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:39476.service: Deactivated successfully.
Apr 12 18:55:23.085602 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:55:23.086363 systemd-logind[1176]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:55:23.088272 systemd-logind[1176]: Removed session 18.
Apr 12 18:55:23.126322 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 39480 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:23.127899 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:23.137426 systemd-logind[1176]: New session 19 of user core.
Apr 12 18:55:23.138735 systemd[1]: Started session-19.scope.
Apr 12 18:55:23.549768 kubelet[2089]: E0412 18:55:23.549714 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:24.190152 sshd[3641]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:24.193746 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:39490.service.
Apr 12 18:55:24.199448 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:39480.service: Deactivated successfully.
Apr 12 18:55:24.201542 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:55:24.201845 systemd-logind[1176]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:55:24.203887 systemd-logind[1176]: Removed session 19.
Apr 12 18:55:24.236107 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 39490 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:24.238029 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:24.247169 systemd[1]: Started session-20.scope.
Apr 12 18:55:24.249712 systemd-logind[1176]: New session 20 of user core.
Apr 12 18:55:24.732462 sshd[3666]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:24.735906 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:39502.service.
Apr 12 18:55:24.737122 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:39490.service: Deactivated successfully.
Apr 12 18:55:24.739665 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:55:24.740513 systemd-logind[1176]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:55:24.742301 systemd-logind[1176]: Removed session 20.
Apr 12 18:55:24.773575 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 39502 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:24.775263 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:24.781685 systemd-logind[1176]: New session 21 of user core.
Apr 12 18:55:24.782770 systemd[1]: Started session-21.scope.
Apr 12 18:55:24.921778 sshd[3678]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:24.925080 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:39502.service: Deactivated successfully.
Apr 12 18:55:24.926346 systemd-logind[1176]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:55:24.926392 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:55:24.927452 systemd-logind[1176]: Removed session 21.
Apr 12 18:55:28.549449 kubelet[2089]: E0412 18:55:28.549400 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:29.926679 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:56044.service.
Apr 12 18:55:29.977766 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 56044 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:29.984829 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:29.997192 systemd-logind[1176]: New session 22 of user core.
Apr 12 18:55:29.997757 systemd[1]: Started session-22.scope.
Apr 12 18:55:30.186635 sshd[3695]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:30.192182 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:56044.service: Deactivated successfully.
Apr 12 18:55:30.199469 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:55:30.200759 systemd-logind[1176]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:55:30.202934 systemd-logind[1176]: Removed session 22.
Apr 12 18:55:34.550082 kubelet[2089]: E0412 18:55:34.550030 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:35.189627 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:56052.service.
Apr 12 18:55:35.224672 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 56052 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:35.226105 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:35.230176 systemd-logind[1176]: New session 23 of user core.
Apr 12 18:55:35.231293 systemd[1]: Started session-23.scope.
Apr 12 18:55:35.346415 sshd[3712]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:35.349513 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:56052.service: Deactivated successfully.
Apr 12 18:55:35.350626 systemd-logind[1176]: Session 23 logged out. Waiting for processes to exit.
Apr 12 18:55:35.350687 systemd[1]: session-23.scope: Deactivated successfully.
Apr 12 18:55:35.351577 systemd-logind[1176]: Removed session 23.
Apr 12 18:55:36.550014 kubelet[2089]: E0412 18:55:36.549961 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:40.349817 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:47596.service.
Apr 12 18:55:40.380810 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 47596 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:40.381883 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:40.385655 systemd-logind[1176]: New session 24 of user core.
Apr 12 18:55:40.386814 systemd[1]: Started session-24.scope.
Apr 12 18:55:40.494610 sshd[3726]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:40.497217 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:47596.service: Deactivated successfully.
Apr 12 18:55:40.498470 systemd-logind[1176]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:55:40.498529 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:55:40.499343 systemd-logind[1176]: Removed session 24.
Apr 12 18:55:45.497833 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:47604.service.
Apr 12 18:55:45.527892 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 47604 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:45.528854 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:45.531888 systemd-logind[1176]: New session 25 of user core.
Apr 12 18:55:45.532610 systemd[1]: Started session-25.scope.
Apr 12 18:55:45.630094 sshd[3742]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:45.633177 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:47610.service.
Apr 12 18:55:45.633644 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:47604.service: Deactivated successfully.
Apr 12 18:55:45.634884 systemd[1]: session-25.scope: Deactivated successfully.
Apr 12 18:55:45.635232 systemd-logind[1176]: Session 25 logged out. Waiting for processes to exit.
Apr 12 18:55:45.637560 systemd-logind[1176]: Removed session 25.
Apr 12 18:55:45.665086 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 47610 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:45.666080 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:45.668950 systemd-logind[1176]: New session 26 of user core.
Apr 12 18:55:45.669691 systemd[1]: Started session-26.scope.
Apr 12 18:55:47.081424 env[1199]: time="2024-04-12T18:55:47.081345180Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:55:47.086319 env[1199]: time="2024-04-12T18:55:47.086286484Z" level=info msg="StopContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" with timeout 1 (s)"
Apr 12 18:55:47.086598 env[1199]: time="2024-04-12T18:55:47.086567407Z" level=info msg="Stop container \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" with signal terminated"
Apr 12 18:55:47.092435 systemd-networkd[1073]: lxc_health: Link DOWN
Apr 12 18:55:47.092443 systemd-networkd[1073]: lxc_health: Lost carrier
Apr 12 18:55:47.148912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707-rootfs.mount: Deactivated successfully.
Apr 12 18:55:47.169393 env[1199]: time="2024-04-12T18:55:47.168246998Z" level=info msg="StopContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" with timeout 30 (s)"
Apr 12 18:55:47.169939 env[1199]: time="2024-04-12T18:55:47.169906135Z" level=info msg="Stop container \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" with signal terminated"
Apr 12 18:55:47.194840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187-rootfs.mount: Deactivated successfully.
Apr 12 18:55:47.391770 env[1199]: time="2024-04-12T18:55:47.391726714Z" level=info msg="shim disconnected" id=0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707
Apr 12 18:55:47.391956 env[1199]: time="2024-04-12T18:55:47.391771428Z" level=warning msg="cleaning up after shim disconnected" id=0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707 namespace=k8s.io
Apr 12 18:55:47.391956 env[1199]: time="2024-04-12T18:55:47.391785926Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:47.391956 env[1199]: time="2024-04-12T18:55:47.391840049Z" level=info msg="shim disconnected" id=8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187
Apr 12 18:55:47.391956 env[1199]: time="2024-04-12T18:55:47.391901094Z" level=warning msg="cleaning up after shim disconnected" id=8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187 namespace=k8s.io
Apr 12 18:55:47.391956 env[1199]: time="2024-04-12T18:55:47.391913088Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:47.398260 env[1199]: time="2024-04-12T18:55:47.398204002Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:47.398942 env[1199]: time="2024-04-12T18:55:47.398912105Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:47.412866 env[1199]: time="2024-04-12T18:55:47.412821382Z" level=info msg="StopContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" returns successfully"
Apr 12 18:55:47.413587 env[1199]: time="2024-04-12T18:55:47.413552378Z" level=info msg="StopPodSandbox for \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\""
Apr 12 18:55:47.413654 env[1199]: time="2024-04-12T18:55:47.413636558Z" level=info msg="Container to stop \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.413691 env[1199]: time="2024-04-12T18:55:47.413658290Z" level=info msg="Container to stop \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.413691 env[1199]: time="2024-04-12T18:55:47.413673388Z" level=info msg="Container to stop \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.413754 env[1199]: time="2024-04-12T18:55:47.413687925Z" level=info msg="Container to stop \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.413754 env[1199]: time="2024-04-12T18:55:47.413706261Z" level=info msg="Container to stop \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.415415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c-shm.mount: Deactivated successfully.
Apr 12 18:55:47.419950 env[1199]: time="2024-04-12T18:55:47.419916312Z" level=info msg="StopContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" returns successfully"
Apr 12 18:55:47.420524 env[1199]: time="2024-04-12T18:55:47.420489058Z" level=info msg="StopPodSandbox for \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\""
Apr 12 18:55:47.420576 env[1199]: time="2024-04-12T18:55:47.420566404Z" level=info msg="Container to stop \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:47.423893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1-shm.mount: Deactivated successfully.
Apr 12 18:55:47.437611 env[1199]: time="2024-04-12T18:55:47.437554342Z" level=info msg="shim disconnected" id=471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c
Apr 12 18:55:47.437611 env[1199]: time="2024-04-12T18:55:47.437604998Z" level=warning msg="cleaning up after shim disconnected" id=471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c namespace=k8s.io
Apr 12 18:55:47.437611 env[1199]: time="2024-04-12T18:55:47.437614185Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:47.447148 env[1199]: time="2024-04-12T18:55:47.447082157Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3889 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:47.447534 env[1199]: time="2024-04-12T18:55:47.447503116Z" level=info msg="TearDown network for sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" successfully"
Apr 12 18:55:47.447534 env[1199]: time="2024-04-12T18:55:47.447531839Z" level=info msg="StopPodSandbox for \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" returns successfully"
Apr 12 18:55:47.447892 env[1199]: time="2024-04-12T18:55:47.447859331Z" level=info msg="shim disconnected" id=982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1
Apr 12 18:55:47.447943 env[1199]: time="2024-04-12T18:55:47.447892714Z" level=warning msg="cleaning up after shim disconnected" id=982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1 namespace=k8s.io
Apr 12 18:55:47.447943 env[1199]: time="2024-04-12T18:55:47.447900579Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:47.461813 env[1199]: time="2024-04-12T18:55:47.461766755Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:47.462118 env[1199]: time="2024-04-12T18:55:47.462093464Z" level=info msg="TearDown network for sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" successfully"
Apr 12 18:55:47.462118 env[1199]: time="2024-04-12T18:55:47.462117529Z" level=info msg="StopPodSandbox for \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" returns successfully"
Apr 12 18:55:47.638192 kubelet[2089]: I0412 18:55:47.638152 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cni-path\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") "
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638216 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-664kj\" (UniqueName: \"kubernetes.io/projected/646f34bf-a3b1-4462-abb0-f9846f7ecc24-kube-api-access-664kj\") pod \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\" (UID: \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\") "
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638246 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-hostproc\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") "
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638268 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-lib-modules\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") "
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638282 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638300 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2df9ace5-4433-4936-8c79-e49b42acc0e9-clustermesh-secrets\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") "
Apr 12 18:55:47.639319 kubelet[2089]: I0412 18:55:47.638385 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646f34bf-a3b1-4462-abb0-f9846f7ecc24-cilium-config-path\") pod \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\" (UID: \"646f34bf-a3b1-4462-abb0-f9846f7ecc24\") "
Apr 12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638412 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-run\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr
12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638429 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-kernel\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638448 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-xtables-lock\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638467 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-bpf-maps\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638485 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-hubble-tls\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639561 kubelet[2089]: I0412 18:55:47.638501 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-etc-cni-netd\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638519 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ntkc\" (UniqueName: 
\"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-kube-api-access-4ntkc\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638537 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-config-path\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638553 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-cgroup\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638571 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-net\") pod \"2df9ace5-4433-4936-8c79-e49b42acc0e9\" (UID: \"2df9ace5-4433-4936-8c79-e49b42acc0e9\") " Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638600 2089 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.639891 kubelet[2089]: I0412 18:55:47.638618 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640126 kubelet[2089]: I0412 18:55:47.638866 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640126 kubelet[2089]: I0412 18:55:47.639161 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640126 kubelet[2089]: I0412 18:55:47.639197 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640126 kubelet[2089]: I0412 18:55:47.639213 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640126 kubelet[2089]: I0412 18:55:47.639225 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640295 kubelet[2089]: I0412 18:55:47.639241 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640295 kubelet[2089]: W0412 18:55:47.639227 2089 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/646f34bf-a3b1-4462-abb0-f9846f7ecc24/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:55:47.640295 kubelet[2089]: I0412 18:55:47.639370 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.640295 kubelet[2089]: W0412 18:55:47.639481 2089 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2df9ace5-4433-4936-8c79-e49b42acc0e9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:55:47.641694 kubelet[2089]: I0412 18:55:47.641656 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df9ace5-4433-4936-8c79-e49b42acc0e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:55:47.643462 kubelet[2089]: I0412 18:55:47.641692 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646f34bf-a3b1-4462-abb0-f9846f7ecc24-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "646f34bf-a3b1-4462-abb0-f9846f7ecc24" (UID: "646f34bf-a3b1-4462-abb0-f9846f7ecc24"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:55:47.643462 kubelet[2089]: I0412 18:55:47.641720 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:47.643462 kubelet[2089]: I0412 18:55:47.641930 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:55:47.643954 kubelet[2089]: I0412 18:55:47.643918 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-kube-api-access-4ntkc" (OuterVolumeSpecName: "kube-api-access-4ntkc") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "kube-api-access-4ntkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:47.644315 kubelet[2089]: I0412 18:55:47.644281 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2df9ace5-4433-4936-8c79-e49b42acc0e9" (UID: "2df9ace5-4433-4936-8c79-e49b42acc0e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:47.644504 kubelet[2089]: I0412 18:55:47.644466 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646f34bf-a3b1-4462-abb0-f9846f7ecc24-kube-api-access-664kj" (OuterVolumeSpecName: "kube-api-access-664kj") pod "646f34bf-a3b1-4462-abb0-f9846f7ecc24" (UID: "646f34bf-a3b1-4462-abb0-f9846f7ecc24"). InnerVolumeSpecName "kube-api-access-664kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:47.738721 kubelet[2089]: I0412 18:55:47.738696 2089 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4ntkc\" (UniqueName: \"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-kube-api-access-4ntkc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738721 kubelet[2089]: I0412 18:55:47.738719 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738729 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738737 2089 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738746 2089 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738755 2089 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-664kj\" (UniqueName: \"kubernetes.io/projected/646f34bf-a3b1-4462-abb0-f9846f7ecc24-kube-api-access-664kj\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738763 2089 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 
kubelet[2089]: I0412 18:55:47.738771 2089 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2df9ace5-4433-4936-8c79-e49b42acc0e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738780 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646f34bf-a3b1-4462-abb0-f9846f7ecc24-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.738878 kubelet[2089]: I0412 18:55:47.738787 2089 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.739090 kubelet[2089]: I0412 18:55:47.738796 2089 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.739090 kubelet[2089]: I0412 18:55:47.738805 2089 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2df9ace5-4433-4936-8c79-e49b42acc0e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.739090 kubelet[2089]: I0412 18:55:47.738814 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.739090 kubelet[2089]: I0412 18:55:47.738823 2089 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.739090 kubelet[2089]: I0412 18:55:47.738831 2089 reconciler_common.go:300] "Volume detached for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2df9ace5-4433-4936-8c79-e49b42acc0e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:47.911424 kubelet[2089]: I0412 18:55:47.911336 2089 scope.go:115] "RemoveContainer" containerID="8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187" Apr 12 18:55:47.912932 env[1199]: time="2024-04-12T18:55:47.912895405Z" level=info msg="RemoveContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\"" Apr 12 18:55:47.916497 env[1199]: time="2024-04-12T18:55:47.916456410Z" level=info msg="RemoveContainer for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" returns successfully" Apr 12 18:55:47.916783 kubelet[2089]: I0412 18:55:47.916643 2089 scope.go:115] "RemoveContainer" containerID="8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187" Apr 12 18:55:47.917212 env[1199]: time="2024-04-12T18:55:47.917133404Z" level=error msg="ContainerStatus for \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\": not found" Apr 12 18:55:47.917326 kubelet[2089]: E0412 18:55:47.917304 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\": not found" containerID="8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187" Apr 12 18:55:47.917366 kubelet[2089]: I0412 18:55:47.917353 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187} err="failed to get container status \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"8317e9657a7021869448d05671c40c56bd3e12ba5c5de6199aa013ec04ff8187\": not found" Apr 12 18:55:47.917366 kubelet[2089]: I0412 18:55:47.917362 2089 scope.go:115] "RemoveContainer" containerID="0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707" Apr 12 18:55:47.919343 env[1199]: time="2024-04-12T18:55:47.919281048Z" level=info msg="RemoveContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\"" Apr 12 18:55:47.922766 env[1199]: time="2024-04-12T18:55:47.922728689Z" level=info msg="RemoveContainer for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" returns successfully" Apr 12 18:55:47.922925 kubelet[2089]: I0412 18:55:47.922896 2089 scope.go:115] "RemoveContainer" containerID="369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b" Apr 12 18:55:47.923987 env[1199]: time="2024-04-12T18:55:47.923957129Z" level=info msg="RemoveContainer for \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\"" Apr 12 18:55:47.928348 env[1199]: time="2024-04-12T18:55:47.928315458Z" level=info msg="RemoveContainer for \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\" returns successfully" Apr 12 18:55:47.928959 kubelet[2089]: I0412 18:55:47.928933 2089 scope.go:115] "RemoveContainer" containerID="5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a" Apr 12 18:55:47.930439 env[1199]: time="2024-04-12T18:55:47.930397267Z" level=info msg="RemoveContainer for \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\"" Apr 12 18:55:47.932916 env[1199]: time="2024-04-12T18:55:47.932895696Z" level=info msg="RemoveContainer for \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\" returns successfully" Apr 12 18:55:47.933093 kubelet[2089]: I0412 18:55:47.933054 2089 scope.go:115] "RemoveContainer" containerID="8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12" Apr 12 18:55:47.933964 env[1199]: 
time="2024-04-12T18:55:47.933927023Z" level=info msg="RemoveContainer for \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\"" Apr 12 18:55:47.936702 env[1199]: time="2024-04-12T18:55:47.936670006Z" level=info msg="RemoveContainer for \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\" returns successfully" Apr 12 18:55:47.936833 kubelet[2089]: I0412 18:55:47.936814 2089 scope.go:115] "RemoveContainer" containerID="d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805" Apr 12 18:55:47.937668 env[1199]: time="2024-04-12T18:55:47.937643373Z" level=info msg="RemoveContainer for \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\"" Apr 12 18:55:47.940109 env[1199]: time="2024-04-12T18:55:47.940068534Z" level=info msg="RemoveContainer for \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\" returns successfully" Apr 12 18:55:47.940233 kubelet[2089]: I0412 18:55:47.940209 2089 scope.go:115] "RemoveContainer" containerID="0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707" Apr 12 18:55:47.940415 env[1199]: time="2024-04-12T18:55:47.940362541Z" level=error msg="ContainerStatus for \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\": not found" Apr 12 18:55:47.940524 kubelet[2089]: E0412 18:55:47.940504 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\": not found" containerID="0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707" Apr 12 18:55:47.940579 kubelet[2089]: I0412 18:55:47.940541 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707} err="failed to get container status \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b7b8826e9d2b0a629a8158793e92bc08f653ecae0496e91c288ae2f2e4d5707\": not found" Apr 12 18:55:47.940579 kubelet[2089]: I0412 18:55:47.940554 2089 scope.go:115] "RemoveContainer" containerID="369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b" Apr 12 18:55:47.940799 env[1199]: time="2024-04-12T18:55:47.940743394Z" level=error msg="ContainerStatus for \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\": not found" Apr 12 18:55:47.940954 kubelet[2089]: E0412 18:55:47.940934 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\": not found" containerID="369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b" Apr 12 18:55:47.940954 kubelet[2089]: I0412 18:55:47.940959 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b} err="failed to get container status \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\": rpc error: code = NotFound desc = an error occurred when try to find container \"369282fb8e9cc7c2a53e74f3e681ae82ec91be5d8cf15f526ed56b5430ff587b\": not found" Apr 12 18:55:47.941075 kubelet[2089]: I0412 18:55:47.940968 2089 scope.go:115] "RemoveContainer" containerID="5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a" Apr 12 18:55:47.941147 env[1199]: time="2024-04-12T18:55:47.941112063Z" level=error 
msg="ContainerStatus for \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\": not found" Apr 12 18:55:47.941217 kubelet[2089]: E0412 18:55:47.941208 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\": not found" containerID="5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a" Apr 12 18:55:47.941262 kubelet[2089]: I0412 18:55:47.941230 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a} err="failed to get container status \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d07f23a87a020c0bb00d0996558c5397aa3494bcc3a40f45a88c9d65ad5144a\": not found" Apr 12 18:55:47.941262 kubelet[2089]: I0412 18:55:47.941239 2089 scope.go:115] "RemoveContainer" containerID="8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12" Apr 12 18:55:47.941386 env[1199]: time="2024-04-12T18:55:47.941349934Z" level=error msg="ContainerStatus for \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\": not found" Apr 12 18:55:47.941539 kubelet[2089]: E0412 18:55:47.941521 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\": not found" 
containerID="8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12" Apr 12 18:55:47.941596 kubelet[2089]: I0412 18:55:47.941555 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12} err="failed to get container status \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b26d5737c6d7a18cce6b14a2cde6fe87340f33fd3689287cf08f13dbdd12\": not found" Apr 12 18:55:47.941596 kubelet[2089]: I0412 18:55:47.941565 2089 scope.go:115] "RemoveContainer" containerID="d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805" Apr 12 18:55:47.941789 env[1199]: time="2024-04-12T18:55:47.941726639Z" level=error msg="ContainerStatus for \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\": not found" Apr 12 18:55:47.941907 kubelet[2089]: E0412 18:55:47.941894 2089 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\": not found" containerID="d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805" Apr 12 18:55:47.941954 kubelet[2089]: I0412 18:55:47.941914 2089 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805} err="failed to get container status \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3745b9d861854f16aba830fc6026f492d27619e7e85ac50fbc76ca71b32e805\": not found" Apr 12 18:55:48.068801 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1-rootfs.mount: Deactivated successfully. Apr 12 18:55:48.068949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c-rootfs.mount: Deactivated successfully. Apr 12 18:55:48.069048 systemd[1]: var-lib-kubelet-pods-646f34bf\x2da3b1\x2d4462\x2dabb0\x2df9846f7ecc24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d664kj.mount: Deactivated successfully. Apr 12 18:55:48.069142 systemd[1]: var-lib-kubelet-pods-2df9ace5\x2d4433\x2d4936\x2d8c79\x2de49b42acc0e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ntkc.mount: Deactivated successfully. Apr 12 18:55:48.069223 systemd[1]: var-lib-kubelet-pods-2df9ace5\x2d4433\x2d4936\x2d8c79\x2de49b42acc0e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:55:48.069307 systemd[1]: var-lib-kubelet-pods-2df9ace5\x2d4433\x2d4936\x2d8c79\x2de49b42acc0e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:55:48.551797 kubelet[2089]: I0412 18:55:48.551755 2089 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2df9ace5-4433-4936-8c79-e49b42acc0e9 path="/var/lib/kubelet/pods/2df9ace5-4433-4936-8c79-e49b42acc0e9/volumes" Apr 12 18:55:48.552710 kubelet[2089]: I0412 18:55:48.552699 2089 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=646f34bf-a3b1-4462-abb0-f9846f7ecc24 path="/var/lib/kubelet/pods/646f34bf-a3b1-4462-abb0-f9846f7ecc24/volumes" Apr 12 18:55:48.983036 sshd[3756]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:48.986084 systemd[1]: Started sshd@26-10.0.0.108:22-10.0.0.1:41514.service. Apr 12 18:55:48.986794 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:47610.service: Deactivated successfully. 
Apr 12 18:55:48.988744 systemd[1]: session-26.scope: Deactivated successfully.
Apr 12 18:55:48.990170 systemd-logind[1176]: Session 26 logged out. Waiting for processes to exit.
Apr 12 18:55:48.991143 systemd-logind[1176]: Removed session 26.
Apr 12 18:55:49.019738 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 41514 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:49.021361 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:49.025224 systemd-logind[1176]: New session 27 of user core.
Apr 12 18:55:49.026065 systemd[1]: Started session-27.scope.
Apr 12 18:55:49.803340 systemd[1]: Started sshd@27-10.0.0.108:22-10.0.0.1:41518.service.
Apr 12 18:55:49.812825 sshd[3927]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:49.821462 kubelet[2089]: I0412 18:55:49.817449 2089 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817516 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="646f34bf-a3b1-4462-abb0-f9846f7ecc24" containerName="cilium-operator"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817526 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="mount-cgroup"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817532 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="apply-sysctl-overwrites"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817540 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="mount-bpf-fs"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817547 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="clean-cilium-state"
Apr 12 18:55:49.821462 kubelet[2089]: E0412 18:55:49.817554 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="cilium-agent"
Apr 12 18:55:49.821462 kubelet[2089]: I0412 18:55:49.817575 2089 memory_manager.go:346] "RemoveStaleState removing state" podUID="646f34bf-a3b1-4462-abb0-f9846f7ecc24" containerName="cilium-operator"
Apr 12 18:55:49.821462 kubelet[2089]: I0412 18:55:49.817581 2089 memory_manager.go:346] "RemoveStaleState removing state" podUID="2df9ace5-4433-4936-8c79-e49b42acc0e9" containerName="cilium-agent"
Apr 12 18:55:49.823394 systemd[1]: sshd@26-10.0.0.108:22-10.0.0.1:41514.service: Deactivated successfully.
Apr 12 18:55:49.827239 systemd[1]: session-27.scope: Deactivated successfully.
Apr 12 18:55:49.829244 systemd-logind[1176]: Session 27 logged out. Waiting for processes to exit.
Apr 12 18:55:49.834374 systemd-logind[1176]: Removed session 27.
Apr 12 18:55:49.857571 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 41518 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:49.858797 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:49.862909 systemd-logind[1176]: New session 28 of user core.
Apr 12 18:55:49.863981 systemd[1]: Started session-28.scope.
Apr 12 18:55:49.949342 kubelet[2089]: I0412 18:55:49.949295 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hostproc\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949342 kubelet[2089]: I0412 18:55:49.949343 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-lib-modules\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949342 kubelet[2089]: I0412 18:55:49.949364 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-bpf-maps\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949635 kubelet[2089]: I0412 18:55:49.949384 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-etc-cni-netd\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949635 kubelet[2089]: I0412 18:55:49.949406 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-clustermesh-secrets\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949635 kubelet[2089]: I0412 18:55:49.949423 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-ipsec-secrets\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949635 kubelet[2089]: I0412 18:55:49.949442 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxgd\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-kube-api-access-wlxgd\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949635 kubelet[2089]: I0412 18:55:49.949459 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-config-path\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949476 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-run\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949492 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-net\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949509 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hubble-tls\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949530 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-cgroup\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949546 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cni-path\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949762 kubelet[2089]: I0412 18:55:49.949564 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-xtables-lock\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.949907 kubelet[2089]: I0412 18:55:49.949585 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-kernel\") pod \"cilium-h4t4d\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") " pod="kube-system/cilium-h4t4d"
Apr 12 18:55:49.984271 sshd[3939]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:49.988193 systemd[1]: Started sshd@28-10.0.0.108:22-10.0.0.1:41532.service.
Apr 12 18:55:49.992093 systemd[1]: sshd@27-10.0.0.108:22-10.0.0.1:41518.service: Deactivated successfully.
Apr 12 18:55:49.992891 systemd[1]: session-28.scope: Deactivated successfully.
Apr 12 18:55:49.999095 systemd-logind[1176]: Session 28 logged out. Waiting for processes to exit.
Apr 12 18:55:50.000073 systemd-logind[1176]: Removed session 28.
Apr 12 18:55:50.024546 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 41532 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:55:50.025728 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:50.029443 systemd-logind[1176]: New session 29 of user core.
Apr 12 18:55:50.030395 systemd[1]: Started session-29.scope.
Apr 12 18:55:50.124291 kubelet[2089]: E0412 18:55:50.124236 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:50.125103 env[1199]: time="2024-04-12T18:55:50.125040316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4t4d,Uid:e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888,Namespace:kube-system,Attempt:0,}"
Apr 12 18:55:50.155868 env[1199]: time="2024-04-12T18:55:50.155762819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:55:50.155868 env[1199]: time="2024-04-12T18:55:50.155811321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:55:50.155868 env[1199]: time="2024-04-12T18:55:50.155826761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:55:50.156223 env[1199]: time="2024-04-12T18:55:50.156125577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b pid=3980 runtime=io.containerd.runc.v2
Apr 12 18:55:50.198681 env[1199]: time="2024-04-12T18:55:50.198617642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4t4d,Uid:e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888,Namespace:kube-system,Attempt:0,} returns sandbox id \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\""
Apr 12 18:55:50.199416 kubelet[2089]: E0412 18:55:50.199396 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:50.201554 env[1199]: time="2024-04-12T18:55:50.201508302Z" level=info msg="CreateContainer within sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:55:50.214536 env[1199]: time="2024-04-12T18:55:50.214451910Z" level=info msg="CreateContainer within sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\""
Apr 12 18:55:50.215359 env[1199]: time="2024-04-12T18:55:50.215300178Z" level=info msg="StartContainer for \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\""
Apr 12 18:55:50.258031 env[1199]: time="2024-04-12T18:55:50.257946085Z" level=info msg="StartContainer for \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\" returns successfully"
Apr 12 18:55:50.294397 env[1199]: time="2024-04-12T18:55:50.294336653Z" level=info msg="shim disconnected" id=3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c
Apr 12 18:55:50.294397 env[1199]: time="2024-04-12T18:55:50.294397007Z" level=warning msg="cleaning up after shim disconnected" id=3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c namespace=k8s.io
Apr 12 18:55:50.294397 env[1199]: time="2024-04-12T18:55:50.294406424Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:50.301446 env[1199]: time="2024-04-12T18:55:50.301417205Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4064 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:50.550019 kubelet[2089]: E0412 18:55:50.549877 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:50.653057 kubelet[2089]: E0412 18:55:50.653010 2089 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:55:50.924194 env[1199]: time="2024-04-12T18:55:50.924154697Z" level=info msg="StopPodSandbox for \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\""
Apr 12 18:55:50.924383 env[1199]: time="2024-04-12T18:55:50.924225232Z" level=info msg="Container to stop \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:50.952453 env[1199]: time="2024-04-12T18:55:50.952380418Z" level=info msg="shim disconnected" id=54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b
Apr 12 18:55:50.952453 env[1199]: time="2024-04-12T18:55:50.952453245Z" level=warning msg="cleaning up after shim disconnected" id=54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b namespace=k8s.io
Apr 12 18:55:50.952654 env[1199]: time="2024-04-12T18:55:50.952467393Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:50.959154 env[1199]: time="2024-04-12T18:55:50.959096760Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4096 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:50.959406 env[1199]: time="2024-04-12T18:55:50.959379726Z" level=info msg="TearDown network for sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" successfully"
Apr 12 18:55:50.959441 env[1199]: time="2024-04-12T18:55:50.959405616Z" level=info msg="StopPodSandbox for \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" returns successfully"
Apr 12 18:55:51.056477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b-rootfs.mount: Deactivated successfully.
Apr 12 18:55:51.056614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b-shm.mount: Deactivated successfully.
Apr 12 18:55:51.156601 kubelet[2089]: I0412 18:55:51.156537 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-run\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.156601 kubelet[2089]: I0412 18:55:51.156584 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hostproc\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.156601 kubelet[2089]: I0412 18:55:51.156609 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-ipsec-secrets\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156631 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cni-path\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156653 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-clustermesh-secrets\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156671 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-lib-modules\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156678 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hostproc" (OuterVolumeSpecName: "hostproc") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156677 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.157097 kubelet[2089]: I0412 18:55:51.156692 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxgd\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-kube-api-access-wlxgd\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156718 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cni-path" (OuterVolumeSpecName: "cni-path") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156755 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-config-path\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156822 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-cgroup\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156855 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-kernel\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156871 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-bpf-maps\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157255 kubelet[2089]: I0412 18:55:51.156888 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-net\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.156908 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-xtables-lock\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.156925 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-etc-cni-netd\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.156957 2089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hubble-tls\") pod \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\" (UID: \"e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888\") "
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.156984 2089 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.157013 2089 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.157021 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.157405 kubelet[2089]: I0412 18:55:51.157022 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.157571 kubelet[2089]: I0412 18:55:51.157056 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.157571 kubelet[2089]: W0412 18:55:51.157163 2089 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Apr 12 18:55:51.160164 kubelet[2089]: I0412 18:55:51.160120 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:55:51.160613 kubelet[2089]: I0412 18:55:51.160273 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.160670 systemd[1]: var-lib-kubelet-pods-e920ebde\x2d2c4d\x2d43e1\x2da9c5\x2da5d4bcfe8888-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlxgd.mount: Deactivated successfully.
Apr 12 18:55:51.160784 kubelet[2089]: I0412 18:55:51.160294 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.160809 systemd[1]: var-lib-kubelet-pods-e920ebde\x2d2c4d\x2d43e1\x2da9c5\x2da5d4bcfe8888-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 18:55:51.161470 kubelet[2089]: I0412 18:55:51.160318 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.161633 kubelet[2089]: I0412 18:55:51.160335 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.161633 kubelet[2089]: I0412 18:55:51.161405 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:55:51.161633 kubelet[2089]: I0412 18:55:51.161443 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:55:51.161633 kubelet[2089]: I0412 18:55:51.161573 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:55:51.162069 kubelet[2089]: I0412 18:55:51.162042 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:51.162702 systemd[1]: var-lib-kubelet-pods-e920ebde\x2d2c4d\x2d43e1\x2da9c5\x2da5d4bcfe8888-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:55:51.162808 kubelet[2089]: I0412 18:55:51.162699 2089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-kube-api-access-wlxgd" (OuterVolumeSpecName: "kube-api-access-wlxgd") pod "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" (UID: "e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888"). InnerVolumeSpecName "kube-api-access-wlxgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:55:51.162811 systemd[1]: var-lib-kubelet-pods-e920ebde\x2d2c4d\x2d43e1\x2da9c5\x2da5d4bcfe8888-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257675 2089 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257719 2089 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wlxgd\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-kube-api-access-wlxgd\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257729 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257742 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257753 2089 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257762 2089 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257771 2089 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.257821 kubelet[2089]: I0412 18:55:51.257780 2089 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.258202 kubelet[2089]: I0412 18:55:51.257789 2089 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.258202 kubelet[2089]: I0412 18:55:51.257798 2089 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.258202 kubelet[2089]: I0412 18:55:51.257811 2089 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.258202 kubelet[2089]: I0412 18:55:51.257823 2089 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:51.926725 kubelet[2089]: I0412 18:55:51.926698 2089 scope.go:115] "RemoveContainer" containerID="3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c"
Apr 12 18:55:51.927604 env[1199]: time="2024-04-12T18:55:51.927569390Z" level=info msg="RemoveContainer for \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\""
Apr 12 18:55:51.931091 env[1199]: time="2024-04-12T18:55:51.931039376Z" level=info msg="RemoveContainer for \"3e24dfd8e5cb94cfb7daffdd87aa3dece7dd91dd0e166784b3fbeff2d3c9063c\" returns successfully"
Apr 12 18:55:51.954179 kubelet[2089]: I0412 18:55:51.954141 2089 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:55:51.954469 kubelet[2089]: E0412 18:55:51.954446 2089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" containerName="mount-cgroup"
Apr 12 18:55:51.954560 kubelet[2089]: I0412 18:55:51.954490 2089 memory_manager.go:346] "RemoveStaleState removing state" podUID="e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888" containerName="mount-cgroup"
Apr 12 18:55:51.960901 kubelet[2089]: I0412 18:55:51.960864 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-cilium-cgroup\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.960912 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcf5v\" (UniqueName: \"kubernetes.io/projected/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-kube-api-access-dcf5v\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.960939 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-host-proc-sys-kernel\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.960965 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-hostproc\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.961010 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-cilium-config-path\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.961046 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-hubble-tls\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961081 kubelet[2089]: I0412 18:55:51.961071 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-xtables-lock\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q"
Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961095 2089 reconciler_common.go:258]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-host-proc-sys-net\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961122 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-lib-modules\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961147 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-cilium-ipsec-secrets\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961171 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-cilium-run\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961202 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-bpf-maps\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961300 kubelet[2089]: I0412 18:55:51.961225 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-cni-path\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961502 kubelet[2089]: I0412 18:55:51.961249 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-etc-cni-netd\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:51.961502 kubelet[2089]: I0412 18:55:51.961271 2089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bb73bae-4750-48f5-b5ce-19bd25c2ff85-clustermesh-secrets\") pod \"cilium-f958q\" (UID: \"4bb73bae-4750-48f5-b5ce-19bd25c2ff85\") " pod="kube-system/cilium-f958q" Apr 12 18:55:52.261093 kubelet[2089]: E0412 18:55:52.260936 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:52.262116 env[1199]: time="2024-04-12T18:55:52.261908879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f958q,Uid:4bb73bae-4750-48f5-b5ce-19bd25c2ff85,Namespace:kube-system,Attempt:0,}" Apr 12 18:55:52.304392 env[1199]: time="2024-04-12T18:55:52.304331148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:55:52.304392 env[1199]: time="2024-04-12T18:55:52.304372747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:55:52.304392 env[1199]: time="2024-04-12T18:55:52.304384008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:55:52.304632 env[1199]: time="2024-04-12T18:55:52.304583607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6 pid=4126 runtime=io.containerd.runc.v2 Apr 12 18:55:52.344806 env[1199]: time="2024-04-12T18:55:52.344762454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f958q,Uid:4bb73bae-4750-48f5-b5ce-19bd25c2ff85,Namespace:kube-system,Attempt:0,} returns sandbox id \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\"" Apr 12 18:55:52.345581 kubelet[2089]: E0412 18:55:52.345563 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:52.347448 env[1199]: time="2024-04-12T18:55:52.347418668Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:55:52.358706 env[1199]: time="2024-04-12T18:55:52.358647565Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"643dde12ff7bf6e2f94c8b15800e7b49978cd832cc7b5fba4f8c851cd3f73701\"" Apr 12 18:55:52.359173 env[1199]: time="2024-04-12T18:55:52.359135190Z" level=info msg="StartContainer for \"643dde12ff7bf6e2f94c8b15800e7b49978cd832cc7b5fba4f8c851cd3f73701\"" Apr 12 18:55:52.404193 env[1199]: time="2024-04-12T18:55:52.404126266Z" level=info msg="StartContainer for \"643dde12ff7bf6e2f94c8b15800e7b49978cd832cc7b5fba4f8c851cd3f73701\" returns successfully" Apr 12 18:55:52.441109 env[1199]: time="2024-04-12T18:55:52.441028344Z" level=info msg="shim disconnected" 
id=643dde12ff7bf6e2f94c8b15800e7b49978cd832cc7b5fba4f8c851cd3f73701 Apr 12 18:55:52.441109 env[1199]: time="2024-04-12T18:55:52.441089931Z" level=warning msg="cleaning up after shim disconnected" id=643dde12ff7bf6e2f94c8b15800e7b49978cd832cc7b5fba4f8c851cd3f73701 namespace=k8s.io Apr 12 18:55:52.441109 env[1199]: time="2024-04-12T18:55:52.441101763Z" level=info msg="cleaning up dead shim" Apr 12 18:55:52.454392 env[1199]: time="2024-04-12T18:55:52.454335641Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\n" Apr 12 18:55:52.551285 kubelet[2089]: I0412 18:55:52.551186 2089 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888 path="/var/lib/kubelet/pods/e920ebde-2c4d-43e1-a9c5-a5d4bcfe8888/volumes" Apr 12 18:55:52.661608 kubelet[2089]: I0412 18:55:52.661578 2089 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:55:52.661523053 +0000 UTC m=+112.202890033 LastTransitionTime:2024-04-12 18:55:52.661523053 +0000 UTC m=+112.202890033 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Apr 12 18:55:52.930521 kubelet[2089]: E0412 18:55:52.930494 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:52.932831 env[1199]: time="2024-04-12T18:55:52.932788941Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:55:52.947641 env[1199]: time="2024-04-12T18:55:52.947566283Z" level=info msg="CreateContainer within sandbox 
\"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be7df99d78c328c6f8c00d6d5020ab5f22ad529d34fefbe9ce5f2a0dd315416f\"" Apr 12 18:55:52.948249 env[1199]: time="2024-04-12T18:55:52.948203241Z" level=info msg="StartContainer for \"be7df99d78c328c6f8c00d6d5020ab5f22ad529d34fefbe9ce5f2a0dd315416f\"" Apr 12 18:55:52.987688 env[1199]: time="2024-04-12T18:55:52.987628049Z" level=info msg="StartContainer for \"be7df99d78c328c6f8c00d6d5020ab5f22ad529d34fefbe9ce5f2a0dd315416f\" returns successfully" Apr 12 18:55:53.009830 env[1199]: time="2024-04-12T18:55:53.009768000Z" level=info msg="shim disconnected" id=be7df99d78c328c6f8c00d6d5020ab5f22ad529d34fefbe9ce5f2a0dd315416f Apr 12 18:55:53.009830 env[1199]: time="2024-04-12T18:55:53.009825760Z" level=warning msg="cleaning up after shim disconnected" id=be7df99d78c328c6f8c00d6d5020ab5f22ad529d34fefbe9ce5f2a0dd315416f namespace=k8s.io Apr 12 18:55:53.009830 env[1199]: time="2024-04-12T18:55:53.009835278Z" level=info msg="cleaning up dead shim" Apr 12 18:55:53.017007 env[1199]: time="2024-04-12T18:55:53.016965356Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4272 runtime=io.containerd.runc.v2\n" Apr 12 18:55:53.935343 kubelet[2089]: E0412 18:55:53.935294 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:53.938015 env[1199]: time="2024-04-12T18:55:53.937951947Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:55:53.955848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898358815.mount: Deactivated successfully. 
Apr 12 18:55:53.959480 env[1199]: time="2024-04-12T18:55:53.959433867Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e\"" Apr 12 18:55:53.960046 env[1199]: time="2024-04-12T18:55:53.960021400Z" level=info msg="StartContainer for \"d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e\"" Apr 12 18:55:54.008872 env[1199]: time="2024-04-12T18:55:54.008806223Z" level=info msg="StartContainer for \"d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e\" returns successfully" Apr 12 18:55:54.029934 env[1199]: time="2024-04-12T18:55:54.029873370Z" level=info msg="shim disconnected" id=d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e Apr 12 18:55:54.029934 env[1199]: time="2024-04-12T18:55:54.029921932Z" level=warning msg="cleaning up after shim disconnected" id=d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e namespace=k8s.io Apr 12 18:55:54.029934 env[1199]: time="2024-04-12T18:55:54.029930898Z" level=info msg="cleaning up dead shim" Apr 12 18:55:54.035750 env[1199]: time="2024-04-12T18:55:54.035696740Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4329 runtime=io.containerd.runc.v2\n" Apr 12 18:55:54.066562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9348adeae6f5533529d1d2b7156f1427105ce5920e11472a72df3ca6e408e7e-rootfs.mount: Deactivated successfully. 
Apr 12 18:55:54.939513 kubelet[2089]: E0412 18:55:54.939481 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:54.941664 env[1199]: time="2024-04-12T18:55:54.941622753Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:55:54.955923 env[1199]: time="2024-04-12T18:55:54.955865211Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6\"" Apr 12 18:55:54.956517 env[1199]: time="2024-04-12T18:55:54.956480046Z" level=info msg="StartContainer for \"6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6\"" Apr 12 18:55:54.996175 env[1199]: time="2024-04-12T18:55:54.996129390Z" level=info msg="StartContainer for \"6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6\" returns successfully" Apr 12 18:55:55.014409 env[1199]: time="2024-04-12T18:55:55.014345621Z" level=info msg="shim disconnected" id=6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6 Apr 12 18:55:55.014409 env[1199]: time="2024-04-12T18:55:55.014413539Z" level=warning msg="cleaning up after shim disconnected" id=6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6 namespace=k8s.io Apr 12 18:55:55.014711 env[1199]: time="2024-04-12T18:55:55.014430441Z" level=info msg="cleaning up dead shim" Apr 12 18:55:55.021210 env[1199]: time="2024-04-12T18:55:55.021161849Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4382 runtime=io.containerd.runc.v2\n" Apr 12 18:55:55.066614 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6ab114eaac1f6668c81562e3afd763712f6678fe3e315d214d9a61c5b4cb23b6-rootfs.mount: Deactivated successfully. Apr 12 18:55:55.654648 kubelet[2089]: E0412 18:55:55.654615 2089 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:55:55.944842 kubelet[2089]: E0412 18:55:55.944698 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:55.948622 env[1199]: time="2024-04-12T18:55:55.948501862Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:55:55.962827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069508022.mount: Deactivated successfully. 
Apr 12 18:55:55.963911 env[1199]: time="2024-04-12T18:55:55.963851251Z" level=info msg="CreateContainer within sandbox \"aacbffbdba9088eb87c15a1f683adb75f2c41346bfc179177ee41f38a362feb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7cd760f6eaa47ef21d81f3d624bb189f9d276ea76de78ecf8874f41ace4267c6\"" Apr 12 18:55:55.964472 env[1199]: time="2024-04-12T18:55:55.964432682Z" level=info msg="StartContainer for \"7cd760f6eaa47ef21d81f3d624bb189f9d276ea76de78ecf8874f41ace4267c6\"" Apr 12 18:55:56.013294 env[1199]: time="2024-04-12T18:55:56.013248628Z" level=info msg="StartContainer for \"7cd760f6eaa47ef21d81f3d624bb189f9d276ea76de78ecf8874f41ace4267c6\" returns successfully" Apr 12 18:55:56.300029 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 12 18:55:56.949507 kubelet[2089]: E0412 18:55:56.949474 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:56.963853 kubelet[2089]: I0412 18:55:56.963812 2089 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f958q" podStartSLOduration=5.96375301 podCreationTimestamp="2024-04-12 18:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:55:56.963146623 +0000 UTC m=+116.504513613" watchObservedRunningTime="2024-04-12 18:55:56.96375301 +0000 UTC m=+116.505119990" Apr 12 18:55:58.262716 kubelet[2089]: E0412 18:55:58.262681 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:58.782815 systemd-networkd[1073]: lxc_health: Link UP Apr 12 18:55:58.790487 systemd-networkd[1073]: lxc_health: Gained carrier Apr 12 18:55:58.791028 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
lxc_health: link becomes ready Apr 12 18:56:00.263217 kubelet[2089]: E0412 18:56:00.263184 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:00.494737 systemd-networkd[1073]: lxc_health: Gained IPv6LL Apr 12 18:56:00.547796 env[1199]: time="2024-04-12T18:56:00.547622136Z" level=info msg="StopPodSandbox for \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\"" Apr 12 18:56:00.547796 env[1199]: time="2024-04-12T18:56:00.547729991Z" level=info msg="TearDown network for sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" successfully" Apr 12 18:56:00.547796 env[1199]: time="2024-04-12T18:56:00.547784915Z" level=info msg="StopPodSandbox for \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" returns successfully" Apr 12 18:56:00.548460 env[1199]: time="2024-04-12T18:56:00.548329285Z" level=info msg="RemovePodSandbox for \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\"" Apr 12 18:56:00.548460 env[1199]: time="2024-04-12T18:56:00.548358550Z" level=info msg="Forcibly stopping sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\"" Apr 12 18:56:00.548460 env[1199]: time="2024-04-12T18:56:00.548428573Z" level=info msg="TearDown network for sandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" successfully" Apr 12 18:56:00.552128 env[1199]: time="2024-04-12T18:56:00.552055005Z" level=info msg="RemovePodSandbox \"982d52cb6ef4acf249d4a43e54a5022b65826178307e1f55d945c966f85b5ba1\" returns successfully" Apr 12 18:56:00.552428 env[1199]: time="2024-04-12T18:56:00.552388275Z" level=info msg="StopPodSandbox for \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\"" Apr 12 18:56:00.552519 env[1199]: time="2024-04-12T18:56:00.552468276Z" level=info msg="TearDown network for sandbox 
\"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" successfully" Apr 12 18:56:00.552598 env[1199]: time="2024-04-12T18:56:00.552521718Z" level=info msg="StopPodSandbox for \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" returns successfully" Apr 12 18:56:00.553596 env[1199]: time="2024-04-12T18:56:00.552875657Z" level=info msg="RemovePodSandbox for \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\"" Apr 12 18:56:00.553596 env[1199]: time="2024-04-12T18:56:00.552905234Z" level=info msg="Forcibly stopping sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\"" Apr 12 18:56:00.553596 env[1199]: time="2024-04-12T18:56:00.552973252Z" level=info msg="TearDown network for sandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" successfully" Apr 12 18:56:00.556028 env[1199]: time="2024-04-12T18:56:00.555981514Z" level=info msg="RemovePodSandbox \"471141734105913d7bb42e7e937cd57ff553f986551f7a39270615680a06347c\" returns successfully" Apr 12 18:56:00.556372 env[1199]: time="2024-04-12T18:56:00.556338890Z" level=info msg="StopPodSandbox for \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\"" Apr 12 18:56:00.556480 env[1199]: time="2024-04-12T18:56:00.556428691Z" level=info msg="TearDown network for sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" successfully" Apr 12 18:56:00.556578 env[1199]: time="2024-04-12T18:56:00.556481831Z" level=info msg="StopPodSandbox for \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" returns successfully" Apr 12 18:56:00.556929 env[1199]: time="2024-04-12T18:56:00.556895534Z" level=info msg="RemovePodSandbox for \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\"" Apr 12 18:56:00.556987 env[1199]: time="2024-04-12T18:56:00.556928616Z" level=info msg="Forcibly stopping sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\"" Apr 12 
18:56:00.557049 env[1199]: time="2024-04-12T18:56:00.557008297Z" level=info msg="TearDown network for sandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" successfully" Apr 12 18:56:00.559871 env[1199]: time="2024-04-12T18:56:00.559843131Z" level=info msg="RemovePodSandbox \"54717cf0c0af0b96e1a6826ca1879a45270cb55525e1085db33680141224cb7b\" returns successfully" Apr 12 18:56:00.956991 kubelet[2089]: E0412 18:56:00.956945 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:04.550275 kubelet[2089]: E0412 18:56:04.550233 2089 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:04.602418 sshd[3953]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:04.604769 systemd[1]: sshd@28-10.0.0.108:22-10.0.0.1:41532.service: Deactivated successfully. Apr 12 18:56:04.605842 systemd[1]: session-29.scope: Deactivated successfully. Apr 12 18:56:04.606980 systemd-logind[1176]: Session 29 logged out. Waiting for processes to exit. Apr 12 18:56:04.608132 systemd-logind[1176]: Removed session 29.