May 8 00:49:29.170060 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 7 23:10:51 -00 2025 May 8 00:49:29.170089 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:49:29.170103 kernel: BIOS-provided physical RAM map: May 8 00:49:29.170111 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:49:29.170134 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 8 00:49:29.170143 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 8 00:49:29.170152 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 8 00:49:29.170160 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 8 00:49:29.170168 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 8 00:49:29.170179 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 8 00:49:29.170202 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 8 00:49:29.170209 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 8 00:49:29.170217 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 8 00:49:29.170225 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 8 00:49:29.170236 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 8 00:49:29.170250 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 8 00:49:29.170258 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 8 00:49:29.170266 kernel: 
BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:49:29.170273 kernel: NX (Execute Disable) protection: active May 8 00:49:29.170281 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 8 00:49:29.170290 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 8 00:49:29.170299 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 8 00:49:29.170308 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 8 00:49:29.170316 kernel: extended physical RAM map: May 8 00:49:29.170325 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:49:29.170336 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 8 00:49:29.170345 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 8 00:49:29.170353 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 8 00:49:29.170362 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 8 00:49:29.170371 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 8 00:49:29.170383 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 8 00:49:29.170391 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 8 00:49:29.170399 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 8 00:49:29.170407 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 8 00:49:29.170415 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 8 00:49:29.170423 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 8 00:49:29.170432 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 8 00:49:29.170440 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 8 00:49:29.170448 
kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 8 00:49:29.170457 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 8 00:49:29.170468 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 8 00:49:29.170477 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 8 00:49:29.170485 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:49:29.170506 kernel: efi: EFI v2.70 by EDK II May 8 00:49:29.170515 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 8 00:49:29.170540 kernel: random: crng init done May 8 00:49:29.170567 kernel: SMBIOS 2.8 present. May 8 00:49:29.170576 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 8 00:49:29.170585 kernel: Hypervisor detected: KVM May 8 00:49:29.170594 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:49:29.170603 kernel: kvm-clock: cpu 0, msr 38198001, primary cpu clock May 8 00:49:29.170611 kernel: kvm-clock: using sched offset of 5800672654 cycles May 8 00:49:29.170631 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:49:29.170640 kernel: tsc: Detected 2794.748 MHz processor May 8 00:49:29.170650 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:49:29.170659 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:49:29.170668 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 8 00:49:29.170678 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:49:29.170689 kernel: Using GB pages for direct mapping May 8 00:49:29.170700 kernel: Secure boot disabled May 8 00:49:29.170711 kernel: ACPI: Early table checksum verification disabled May 8 00:49:29.170725 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 8 00:49:29.170736 kernel: ACPI: 
XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 8 00:49:29.170748 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170760 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170772 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 8 00:49:29.170783 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170794 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170805 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170816 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:49:29.170831 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 8 00:49:29.170842 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 8 00:49:29.170859 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 8 00:49:29.170871 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 8 00:49:29.170882 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 8 00:49:29.170893 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 8 00:49:29.170908 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 8 00:49:29.170918 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 8 00:49:29.170927 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 8 00:49:29.170939 kernel: No NUMA configuration found May 8 00:49:29.170949 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 8 00:49:29.170958 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 8 00:49:29.170968 kernel: Zone ranges: May 8 00:49:29.170978 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 
00:49:29.170987 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 8 00:49:29.170997 kernel: Normal empty May 8 00:49:29.171006 kernel: Movable zone start for each node May 8 00:49:29.171016 kernel: Early memory node ranges May 8 00:49:29.171027 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 8 00:49:29.171037 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 8 00:49:29.171046 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 8 00:49:29.171056 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 8 00:49:29.171065 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 8 00:49:29.171075 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 8 00:49:29.171084 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 8 00:49:29.171094 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:49:29.171103 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 8 00:49:29.171125 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 8 00:49:29.171138 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:49:29.171147 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 8 00:49:29.171157 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 8 00:49:29.171167 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 8 00:49:29.171176 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:49:29.171186 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:49:29.171195 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:49:29.171205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:49:29.171214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:49:29.171226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:49:29.171236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 
00:49:29.171245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:49:29.171259 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:49:29.171268 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:49:29.171277 kernel: TSC deadline timer available May 8 00:49:29.171287 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:49:29.171297 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:49:29.171306 kernel: kvm-guest: setup PV sched yield May 8 00:49:29.171318 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 8 00:49:29.171328 kernel: Booting paravirtualized kernel on KVM May 8 00:49:29.171344 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:49:29.171356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 8 00:49:29.171367 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 8 00:49:29.171377 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 8 00:49:29.171387 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:49:29.171399 kernel: kvm-guest: setup async PF for cpu 0 May 8 00:49:29.171409 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 8 00:49:29.171419 kernel: kvm-guest: PV spinlocks enabled May 8 00:49:29.171429 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:49:29.171439 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 8 00:49:29.171451 kernel: Policy zone: DMA32 May 8 00:49:29.171463 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:49:29.171473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:49:29.171483 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:49:29.171495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:49:29.171505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:49:29.171516 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2279K rwdata, 13724K rodata, 47464K init, 4116K bss, 169308K reserved, 0K cma-reserved) May 8 00:49:29.171526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:49:29.171536 kernel: ftrace: allocating 34584 entries in 136 pages May 8 00:49:29.171566 kernel: ftrace: allocated 136 pages with 2 groups May 8 00:49:29.171576 kernel: rcu: Hierarchical RCU implementation. May 8 00:49:29.171587 kernel: rcu: RCU event tracing is enabled. May 8 00:49:29.171598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:49:29.171611 kernel: Rude variant of Tasks RCU enabled. May 8 00:49:29.171621 kernel: Tracing variant of Tasks RCU enabled. May 8 00:49:29.171632 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 00:49:29.171642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:49:29.171652 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:49:29.171662 kernel: Console: colour dummy device 80x25 May 8 00:49:29.171672 kernel: printk: console [ttyS0] enabled May 8 00:49:29.171682 kernel: ACPI: Core revision 20210730 May 8 00:49:29.171693 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:49:29.171706 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:49:29.171716 kernel: x2apic enabled May 8 00:49:29.171726 kernel: Switched APIC routing to physical x2apic. May 8 00:49:29.171736 kernel: kvm-guest: setup PV IPIs May 8 00:49:29.171745 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:49:29.171756 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:49:29.171766 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 8 00:49:29.171776 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:49:29.171786 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:49:29.171797 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:49:29.171807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:49:29.171817 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:49:29.171827 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:49:29.171837 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:49:29.171847 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:49:29.171857 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:49:29.171871 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:49:29.171881 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp May 8 00:49:29.171893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:49:29.171905 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:49:29.171915 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:49:29.171925 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:49:29.171936 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 8 00:49:29.171945 kernel: Freeing SMP alternatives memory: 32K May 8 00:49:29.171955 kernel: pid_max: default: 32768 minimum: 301 May 8 00:49:29.171965 kernel: LSM: Security Framework initializing May 8 00:49:29.171975 kernel: SELinux: Initializing. May 8 00:49:29.171990 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:49:29.172001 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:49:29.172011 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:49:29.172020 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:49:29.172030 kernel: ... version: 0 May 8 00:49:29.172040 kernel: ... bit width: 48 May 8 00:49:29.172050 kernel: ... generic registers: 6 May 8 00:49:29.172060 kernel: ... value mask: 0000ffffffffffff May 8 00:49:29.172070 kernel: ... max period: 00007fffffffffff May 8 00:49:29.172081 kernel: ... fixed-purpose events: 0 May 8 00:49:29.172091 kernel: ... event mask: 000000000000003f May 8 00:49:29.172101 kernel: signal: max sigframe size: 1776 May 8 00:49:29.172110 kernel: rcu: Hierarchical SRCU implementation. May 8 00:49:29.172132 kernel: smp: Bringing up secondary CPUs ... May 8 00:49:29.172142 kernel: x86: Booting SMP configuration: May 8 00:49:29.172152 kernel: .... 
node #0, CPUs: #1 May 8 00:49:29.172161 kernel: kvm-clock: cpu 1, msr 38198041, secondary cpu clock May 8 00:49:29.172171 kernel: kvm-guest: setup async PF for cpu 1 May 8 00:49:29.172184 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 8 00:49:29.172193 kernel: #2 May 8 00:49:29.172204 kernel: kvm-clock: cpu 2, msr 38198081, secondary cpu clock May 8 00:49:29.172214 kernel: kvm-guest: setup async PF for cpu 2 May 8 00:49:29.172223 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 8 00:49:29.172233 kernel: #3 May 8 00:49:29.172243 kernel: kvm-clock: cpu 3, msr 381980c1, secondary cpu clock May 8 00:49:29.172253 kernel: kvm-guest: setup async PF for cpu 3 May 8 00:49:29.172263 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 8 00:49:29.172274 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:49:29.172284 kernel: smpboot: Max logical packages: 1 May 8 00:49:29.172294 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:49:29.172304 kernel: devtmpfs: initialized May 8 00:49:29.172314 kernel: x86/mm: Memory block size: 128MB May 8 00:49:29.172324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 8 00:49:29.172334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 8 00:49:29.172344 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 8 00:49:29.172354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 8 00:49:29.172371 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 8 00:49:29.172796 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:49:29.172812 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:49:29.172824 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:49:29.172836 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol 
family May 8 00:49:29.172848 kernel: audit: initializing netlink subsys (disabled) May 8 00:49:29.172859 kernel: audit: type=2000 audit(1746665368.698:1): state=initialized audit_enabled=0 res=1 May 8 00:49:29.172870 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:49:29.172882 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:49:29.172898 kernel: cpuidle: using governor menu May 8 00:49:29.172910 kernel: ACPI: bus type PCI registered May 8 00:49:29.172920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:49:29.172930 kernel: dca service started, version 1.12.1 May 8 00:49:29.172939 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:49:29.172949 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 8 00:49:29.172959 kernel: PCI: Using configuration type 1 for base access May 8 00:49:29.172969 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:49:29.172978 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:49:29.172991 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:49:29.173000 kernel: ACPI: Added _OSI(Module Device) May 8 00:49:29.173010 kernel: ACPI: Added _OSI(Processor Device) May 8 00:49:29.173019 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:49:29.173028 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:49:29.173039 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 8 00:49:29.173049 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 8 00:49:29.173059 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 8 00:49:29.173069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:49:29.173082 kernel: ACPI: Interpreter enabled May 8 00:49:29.173093 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:49:29.173103 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:49:29.173156 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:49:29.173178 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:49:29.173189 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:49:29.173618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:49:29.179284 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:49:29.179420 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:49:29.179434 kernel: PCI host bridge to bus 0000:00 May 8 00:49:29.179542 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:49:29.179640 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:49:29.179717 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:49:29.179785 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:49:29.179852 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:49:29.179923 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 8 00:49:29.179992 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:49:29.180095 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:49:29.180212 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:49:29.180312 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 8 00:49:29.180452 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 8 00:49:29.180575 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:49:29.180659 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 8 00:49:29.180736 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:49:29.180851 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:49:29.180970 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 8 00:49:29.181070 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 8 00:49:29.181225 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 8 00:49:29.181346 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:49:29.181446 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 8 00:49:29.181547 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 8 00:49:29.181690 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 8 00:49:29.181802 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 8 00:49:29.181923 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 8 00:49:29.182030 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 8 00:49:29.182178 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 8 00:49:29.182297 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 8 
00:49:29.182432 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:49:29.182541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:49:29.182694 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:49:29.182826 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 8 00:49:29.182944 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 8 00:49:29.183067 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:49:29.183180 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 8 00:49:29.183194 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:49:29.183204 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:49:29.183214 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:49:29.183223 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:49:29.183232 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:49:29.183241 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:49:29.183254 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:49:29.183263 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:49:29.183272 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:49:29.183282 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:49:29.183292 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:49:29.183301 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:49:29.183311 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:49:29.183320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:49:29.183330 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:49:29.183342 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:49:29.183352 kernel: iommu: Default domain type: Translated May 8 
00:49:29.183361 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:49:29.183462 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:49:29.183569 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:49:29.183669 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:49:29.183683 kernel: vgaarb: loaded May 8 00:49:29.183694 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 00:49:29.183709 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 00:49:29.183718 kernel: PTP clock support registered May 8 00:49:29.183728 kernel: Registered efivars operations May 8 00:49:29.183737 kernel: PCI: Using ACPI for IRQ routing May 8 00:49:29.183746 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:49:29.183755 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 8 00:49:29.183764 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 8 00:49:29.183773 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 8 00:49:29.183782 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 8 00:49:29.183791 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 8 00:49:29.183802 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 8 00:49:29.183812 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:49:29.183821 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:49:29.183830 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:49:29.183839 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:49:29.183849 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:49:29.183858 kernel: pnp: PnP ACPI init May 8 00:49:29.183962 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:49:29.183978 kernel: pnp: PnP ACPI: found 6 devices May 8 00:49:29.183988 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns May 8 00:49:29.184004 kernel: NET: Registered PF_INET protocol family May 8 00:49:29.184014 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:49:29.184023 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:49:29.184033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:49:29.184043 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:49:29.184053 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 8 00:49:29.184066 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:49:29.184075 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:49:29.184085 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:49:29.184094 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:49:29.184103 kernel: NET: Registered PF_XDP protocol family May 8 00:49:29.184272 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 8 00:49:29.184381 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 8 00:49:29.184480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:49:29.184588 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:49:29.184673 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:49:29.184744 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:49:29.184811 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:49:29.184879 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 8 00:49:29.184889 kernel: PCI: CLS 0 bytes, default 64 May 8 00:49:29.184897 kernel: Initialise system trusted keyrings May 8 00:49:29.184904 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 May 8 00:49:29.184911 kernel: Key type asymmetric registered May 8 00:49:29.184921 kernel: Asymmetric key parser 'x509' registered May 8 00:49:29.184929 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 8 00:49:29.184947 kernel: io scheduler mq-deadline registered May 8 00:49:29.184955 kernel: io scheduler kyber registered May 8 00:49:29.184963 kernel: io scheduler bfq registered May 8 00:49:29.184970 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:49:29.184979 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:49:29.184986 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:49:29.184994 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:49:29.185003 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:49:29.185010 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:49:29.185018 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:49:29.185025 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:49:29.185033 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:49:29.185041 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:49:29.185148 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:49:29.185221 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:49:29.185294 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:49:28 UTC (1746665368) May 8 00:49:29.185377 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:49:29.185390 kernel: efifb: probing for efifb May 8 00:49:29.185403 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 8 00:49:29.185413 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 8 00:49:29.185424 kernel: efifb: scrolling: redraw May 8 00:49:29.185433 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 00:49:29.185444 kernel: 
Console: switching to colour frame buffer device 160x50 May 8 00:49:29.185453 kernel: fb0: EFI VGA frame buffer device May 8 00:49:29.185466 kernel: pstore: Registered efi as persistent store backend May 8 00:49:29.185475 kernel: NET: Registered PF_INET6 protocol family May 8 00:49:29.185485 kernel: Segment Routing with IPv6 May 8 00:49:29.185498 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:49:29.185508 kernel: NET: Registered PF_PACKET protocol family May 8 00:49:29.185520 kernel: Key type dns_resolver registered May 8 00:49:29.185528 kernel: IPI shorthand broadcast: enabled May 8 00:49:29.185536 kernel: sched_clock: Marking stable (601001954, 152430810)->(810498503, -57065739) May 8 00:49:29.185543 kernel: registered taskstats version 1 May 8 00:49:29.185551 kernel: Loading compiled-in X.509 certificates May 8 00:49:29.185568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: c9ff13353458e6fa2786638fdd3dcad841d1075c' May 8 00:49:29.185576 kernel: Key type .fscrypt registered May 8 00:49:29.185584 kernel: Key type fscrypt-provisioning registered May 8 00:49:29.185592 kernel: pstore: Using crash dump compression: deflate May 8 00:49:29.185602 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:49:29.185609 kernel: ima: Allocated hash algorithm: sha1 May 8 00:49:29.185617 kernel: ima: No architecture policies found May 8 00:49:29.185624 kernel: clk: Disabling unused clocks May 8 00:49:29.185632 kernel: Freeing unused kernel image (initmem) memory: 47464K May 8 00:49:29.185639 kernel: Write protecting the kernel read-only data: 28672k May 8 00:49:29.185647 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 8 00:49:29.185654 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 8 00:49:29.185662 kernel: Run /init as init process May 8 00:49:29.185671 kernel: with arguments: May 8 00:49:29.185678 kernel: /init May 8 00:49:29.185685 kernel: with environment: May 8 00:49:29.185692 kernel: HOME=/ May 8 00:49:29.185700 kernel: TERM=linux May 8 00:49:29.185708 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:49:29.185720 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:49:29.185734 systemd[1]: Detected virtualization kvm. May 8 00:49:29.185745 systemd[1]: Detected architecture x86-64. May 8 00:49:29.185753 systemd[1]: Running in initrd. May 8 00:49:29.185761 systemd[1]: No hostname configured, using default hostname. May 8 00:49:29.185769 systemd[1]: Hostname set to <localhost>. May 8 00:49:29.185777 systemd[1]: Initializing machine ID from VM UUID. May 8 00:49:29.185785 systemd[1]: Queued start job for default target initrd.target. May 8 00:49:29.185793 systemd[1]: Started systemd-ask-password-console.path. May 8 00:49:29.185801 systemd[1]: Reached target cryptsetup.target. May 8 00:49:29.185810 systemd[1]: Reached target paths.target. May 8 00:49:29.185817 systemd[1]: Reached target slices.target. 
May 8 00:49:29.185831 systemd[1]: Reached target swap.target. May 8 00:49:29.185841 systemd[1]: Reached target timers.target. May 8 00:49:29.185849 systemd[1]: Listening on iscsid.socket. May 8 00:49:29.185859 systemd[1]: Listening on iscsiuio.socket. May 8 00:49:29.185867 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:49:29.185876 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:49:29.185889 systemd[1]: Listening on systemd-journald.socket. May 8 00:49:29.185900 systemd[1]: Listening on systemd-networkd.socket. May 8 00:49:29.185911 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:49:29.185921 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:49:29.185932 systemd[1]: Reached target sockets.target. May 8 00:49:29.185943 systemd[1]: Starting kmod-static-nodes.service... May 8 00:49:29.185953 systemd[1]: Finished network-cleanup.service. May 8 00:49:29.185963 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:49:29.185974 systemd[1]: Starting systemd-journald.service... May 8 00:49:29.185987 systemd[1]: Starting systemd-modules-load.service... May 8 00:49:29.185997 systemd[1]: Starting systemd-resolved.service... May 8 00:49:29.186005 systemd[1]: Starting systemd-vconsole-setup.service... May 8 00:49:29.186013 systemd[1]: Finished kmod-static-nodes.service. May 8 00:49:29.186021 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:49:29.186030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:49:29.186038 systemd[1]: Finished systemd-vconsole-setup.service. May 8 00:49:29.186047 kernel: audit: type=1130 audit(1746665369.176:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.186056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 8 00:49:29.186064 kernel: audit: type=1130 audit(1746665369.182:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.186076 systemd-journald[197]: Journal started May 8 00:49:29.186139 systemd-journald[197]: Runtime Journal (/run/log/journal/22ca9d59ebe74ba9b304bb86fe8f3aac) is 6.0M, max 48.4M, 42.4M free. May 8 00:49:29.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.167359 systemd-modules-load[198]: Inserted module 'overlay' May 8 00:49:29.189768 systemd[1]: Started systemd-journald.service. May 8 00:49:29.189804 kernel: audit: type=1130 audit(1746665369.189:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.194329 systemd[1]: Starting dracut-cmdline-ask.service... May 8 00:49:29.202491 systemd-resolved[199]: Positive Trust Anchors: May 8 00:49:29.202508 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:49:29.206394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 8 00:49:29.202535 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:49:29.205718 systemd-resolved[199]: Defaulting to hostname 'linux'. May 8 00:49:29.206697 systemd[1]: Started systemd-resolved.service. May 8 00:49:29.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.207067 systemd[1]: Reached target nss-lookup.target. May 8 00:49:29.211139 kernel: audit: type=1130 audit(1746665369.206:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.224434 systemd[1]: Finished dracut-cmdline-ask.service. May 8 00:49:29.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.225929 systemd[1]: Starting dracut-cmdline.service... May 8 00:49:29.232006 kernel: audit: type=1130 audit(1746665369.224:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:29.232038 kernel: Bridge firewalling registered May 8 00:49:29.230466 systemd-modules-load[198]: Inserted module 'br_netfilter' May 8 00:49:29.237104 dracut-cmdline[216]: dracut-dracut-053 May 8 00:49:29.239547 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:49:29.249151 kernel: SCSI subsystem initialized May 8 00:49:29.303245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:49:29.303312 kernel: device-mapper: uevent: version 1.0.3 May 8 00:49:29.304737 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 8 00:49:29.308107 systemd-modules-load[198]: Inserted module 'dm_multipath' May 8 00:49:29.308957 systemd[1]: Finished systemd-modules-load.service. May 8 00:49:29.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.310885 systemd[1]: Starting systemd-sysctl.service... May 8 00:49:29.316829 kernel: audit: type=1130 audit(1746665369.309:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.320515 systemd[1]: Finished systemd-sysctl.service. 
May 8 00:49:29.325385 kernel: audit: type=1130 audit(1746665369.321:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.330145 kernel: Loading iSCSI transport class v2.0-870. May 8 00:49:29.350157 kernel: iscsi: registered transport (tcp) May 8 00:49:29.378246 kernel: iscsi: registered transport (qla4xxx) May 8 00:49:29.378345 kernel: QLogic iSCSI HBA Driver May 8 00:49:29.412036 systemd[1]: Finished dracut-cmdline.service. May 8 00:49:29.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.413573 systemd[1]: Starting dracut-pre-udev.service... May 8 00:49:29.418095 kernel: audit: type=1130 audit(1746665369.412:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:29.464185 kernel: raid6: avx2x4 gen() 29251 MB/s May 8 00:49:29.531167 kernel: raid6: avx2x4 xor() 7160 MB/s May 8 00:49:29.548159 kernel: raid6: avx2x2 gen() 21960 MB/s May 8 00:49:29.565149 kernel: raid6: avx2x2 xor() 15479 MB/s May 8 00:49:29.582152 kernel: raid6: avx2x1 gen() 20623 MB/s May 8 00:49:29.599176 kernel: raid6: avx2x1 xor() 13878 MB/s May 8 00:49:29.635164 kernel: raid6: sse2x4 gen() 14585 MB/s May 8 00:49:29.652179 kernel: raid6: sse2x4 xor() 6873 MB/s May 8 00:49:29.669173 kernel: raid6: sse2x2 gen() 15897 MB/s May 8 00:49:29.686164 kernel: raid6: sse2x2 xor() 9561 MB/s May 8 00:49:29.758164 kernel: raid6: sse2x1 gen() 12045 MB/s May 8 00:49:29.775569 kernel: raid6: sse2x1 xor() 7664 MB/s May 8 00:49:29.775651 kernel: raid6: using algorithm avx2x4 gen() 29251 MB/s May 8 00:49:29.775661 kernel: raid6: .... xor() 7160 MB/s, rmw enabled May 8 00:49:29.776265 kernel: raid6: using avx2x2 recovery algorithm May 8 00:49:29.790163 kernel: xor: automatically using best checksumming function avx May 8 00:49:29.893148 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 8 00:49:29.901292 systemd[1]: Finished dracut-pre-udev.service. May 8 00:49:29.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.905000 audit: BPF prog-id=7 op=LOAD May 8 00:49:29.905000 audit: BPF prog-id=8 op=LOAD May 8 00:49:29.906132 kernel: audit: type=1130 audit(1746665369.902:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.906296 systemd[1]: Starting systemd-udevd.service... May 8 00:49:29.949981 systemd-udevd[399]: Using default interface naming scheme 'v252'. May 8 00:49:29.954077 systemd[1]: Started systemd-udevd.service. 
May 8 00:49:29.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.955880 systemd[1]: Starting dracut-pre-trigger.service... May 8 00:49:29.966719 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation May 8 00:49:29.994316 systemd[1]: Finished dracut-pre-trigger.service. May 8 00:49:29.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:29.996922 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:49:30.034457 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:49:30.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:30.067155 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:49:30.134323 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:49:30.134340 kernel: libata version 3.00 loaded. May 8 00:49:30.134359 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:49:30.134368 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:49:30.134377 kernel: GPT:9289727 != 19775487 May 8 00:49:30.134386 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:49:30.134402 kernel: GPT:9289727 != 19775487 May 8 00:49:30.134410 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 8 00:49:30.134418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:49:30.134427 kernel: AES CTR mode by8 optimization enabled May 8 00:49:30.134435 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:49:30.155682 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:49:30.155704 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:49:30.155802 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:49:30.155882 kernel: scsi host0: ahci May 8 00:49:30.156012 kernel: scsi host1: ahci May 8 00:49:30.156134 kernel: scsi host2: ahci May 8 00:49:30.156248 kernel: scsi host3: ahci May 8 00:49:30.156344 kernel: scsi host4: ahci May 8 00:49:30.156437 kernel: scsi host5: ahci May 8 00:49:30.156545 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 8 00:49:30.156558 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 8 00:49:30.156567 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 8 00:49:30.156576 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 8 00:49:30.156585 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 8 00:49:30.156593 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 8 00:49:30.159553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 8 00:49:30.185454 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442) May 8 00:49:30.193691 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 8 00:49:30.195063 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 8 00:49:30.201654 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 8 00:49:30.207156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:49:30.208271 systemd[1]: Starting disk-uuid.service... 
May 8 00:49:30.465096 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:49:30.465209 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:49:30.466137 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:49:30.467148 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:49:30.468149 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:49:30.469142 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:49:30.470136 kernel: ata3.00: applying bridge limits May 8 00:49:30.471140 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:49:30.471164 kernel: ata3.00: configured for UDMA/100 May 8 00:49:30.474138 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:49:30.479578 disk-uuid[529]: Primary Header is updated. May 8 00:49:30.479578 disk-uuid[529]: Secondary Entries is updated. May 8 00:49:30.479578 disk-uuid[529]: Secondary Header is updated. May 8 00:49:30.483939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:49:30.490139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:49:30.609150 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:49:30.628865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:49:30.628883 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:49:31.495149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:49:31.495215 disk-uuid[530]: The operation has completed successfully. May 8 00:49:31.521443 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:49:31.521538 systemd[1]: Finished disk-uuid.service. May 8 00:49:31.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:31.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.538470 systemd[1]: Starting verity-setup.service... May 8 00:49:31.551141 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:49:31.577208 systemd[1]: Found device dev-mapper-usr.device. May 8 00:49:31.598921 systemd[1]: Mounting sysusr-usr.mount... May 8 00:49:31.602037 systemd[1]: Finished verity-setup.service. May 8 00:49:31.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.681152 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 8 00:49:31.681768 systemd[1]: Mounted sysusr-usr.mount. May 8 00:49:31.683618 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 8 00:49:31.686134 systemd[1]: Starting ignition-setup.service... May 8 00:49:31.688485 systemd[1]: Starting parse-ip-for-networkd.service... May 8 00:49:31.695859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:49:31.695893 kernel: BTRFS info (device vda6): using free space tree May 8 00:49:31.695906 kernel: BTRFS info (device vda6): has skinny extents May 8 00:49:31.705297 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:49:31.866963 systemd[1]: Finished parse-ip-for-networkd.service. May 8 00:49:31.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.895000 audit: BPF prog-id=9 op=LOAD May 8 00:49:31.896494 systemd[1]: Starting systemd-networkd.service... 
May 8 00:49:31.917764 systemd-networkd[714]: lo: Link UP May 8 00:49:31.917774 systemd-networkd[714]: lo: Gained carrier May 8 00:49:31.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.918368 systemd-networkd[714]: Enumeration completed May 8 00:49:31.918476 systemd[1]: Started systemd-networkd.service. May 8 00:49:31.918650 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:49:31.920147 systemd-networkd[714]: eth0: Link UP May 8 00:49:31.920151 systemd-networkd[714]: eth0: Gained carrier May 8 00:49:31.920704 systemd[1]: Reached target network.target. May 8 00:49:31.923341 systemd[1]: Starting iscsiuio.service... May 8 00:49:31.960996 systemd[1]: Started iscsiuio.service. May 8 00:49:31.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.964300 systemd[1]: Starting iscsid.service... May 8 00:49:31.966266 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:49:31.968341 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 8 00:49:31.968341 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 8 00:49:31.968341 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
May 8 00:49:31.968341 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. May 8 00:49:31.968341 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 8 00:49:31.968341 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 8 00:49:32.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:31.970873 systemd[1]: Started iscsid.service. May 8 00:49:32.030760 systemd[1]: Starting dracut-initqueue.service... May 8 00:49:32.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.044875 systemd[1]: Finished dracut-initqueue.service. May 8 00:49:32.052259 systemd[1]: Reached target remote-fs-pre.target. May 8 00:49:32.053242 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:49:32.054190 systemd[1]: Reached target remote-fs.target. May 8 00:49:32.055698 systemd[1]: Starting dracut-pre-mount.service... May 8 00:49:32.068900 systemd[1]: Finished dracut-pre-mount.service. May 8 00:49:32.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.079584 systemd[1]: Finished ignition-setup.service. May 8 00:49:32.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.081865 systemd[1]: Starting ignition-fetch-offline.service... 
May 8 00:49:32.183577 ignition[734]: Ignition 2.14.0 May 8 00:49:32.183593 ignition[734]: Stage: fetch-offline May 8 00:49:32.183688 ignition[734]: no configs at "/usr/lib/ignition/base.d" May 8 00:49:32.183701 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:49:32.183859 ignition[734]: parsed url from cmdline: "" May 8 00:49:32.183864 ignition[734]: no config URL provided May 8 00:49:32.183871 ignition[734]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:49:32.183881 ignition[734]: no config at "/usr/lib/ignition/user.ign" May 8 00:49:32.183907 ignition[734]: op(1): [started] loading QEMU firmware config module May 8 00:49:32.183927 ignition[734]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:49:32.190241 ignition[734]: op(1): [finished] loading QEMU firmware config module May 8 00:49:32.242508 ignition[734]: parsing config with SHA512: 96656b46048de54074988d70242b8943c3cb17ceb5ad3b1a7646722a35d67a5c1666c0f0eb8b914acdb639444e268cefb275cba7f18f2281c177a2b982549547 May 8 00:49:32.254048 unknown[734]: fetched base config from "system" May 8 00:49:32.254067 unknown[734]: fetched user config from "qemu" May 8 00:49:32.254840 ignition[734]: fetch-offline: fetch-offline passed May 8 00:49:32.254929 ignition[734]: Ignition finished successfully May 8 00:49:32.258536 systemd[1]: Finished ignition-fetch-offline.service. May 8 00:49:32.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.260466 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:49:32.263221 systemd[1]: Starting ignition-kargs.service... 
May 8 00:49:32.278959 ignition[742]: Ignition 2.14.0 May 8 00:49:32.278982 ignition[742]: Stage: kargs May 8 00:49:32.279138 ignition[742]: no configs at "/usr/lib/ignition/base.d" May 8 00:49:32.279155 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:49:32.280607 ignition[742]: kargs: kargs passed May 8 00:49:32.280662 ignition[742]: Ignition finished successfully May 8 00:49:32.286028 systemd[1]: Finished ignition-kargs.service. May 8 00:49:32.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.287888 systemd[1]: Starting ignition-disks.service... May 8 00:49:32.300633 ignition[748]: Ignition 2.14.0 May 8 00:49:32.300645 ignition[748]: Stage: disks May 8 00:49:32.300757 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 8 00:49:32.300766 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:49:32.304857 ignition[748]: disks: disks passed May 8 00:49:32.304907 ignition[748]: Ignition finished successfully May 8 00:49:32.307259 systemd[1]: Finished ignition-disks.service. May 8 00:49:32.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.309246 systemd[1]: Reached target initrd-root-device.target. May 8 00:49:32.311412 systemd[1]: Reached target local-fs-pre.target. May 8 00:49:32.313257 systemd[1]: Reached target local-fs.target. May 8 00:49:32.313827 systemd[1]: Reached target sysinit.target. May 8 00:49:32.315504 systemd[1]: Reached target basic.target. May 8 00:49:32.317930 systemd[1]: Starting systemd-fsck-root.service... 
May 8 00:49:32.335444 systemd-fsck[756]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 8 00:49:32.344681 systemd[1]: Finished systemd-fsck-root.service. May 8 00:49:32.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.346916 systemd[1]: Mounting sysroot.mount... May 8 00:49:32.358157 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 8 00:49:32.358752 systemd[1]: Mounted sysroot.mount. May 8 00:49:32.359644 systemd[1]: Reached target initrd-root-fs.target. May 8 00:49:32.362630 systemd[1]: Mounting sysroot-usr.mount... May 8 00:49:32.364933 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 8 00:49:32.364992 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:49:32.365047 systemd[1]: Reached target ignition-diskful.target. May 8 00:49:32.368950 systemd[1]: Mounted sysroot-usr.mount. May 8 00:49:32.371905 systemd[1]: Starting initrd-setup-root.service... May 8 00:49:32.377779 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:49:32.385548 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory May 8 00:49:32.398174 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:49:32.402844 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:49:32.442222 systemd[1]: Finished initrd-setup-root.service. May 8 00:49:32.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.485616 systemd[1]: Starting ignition-mount.service... 
May 8 00:49:32.488137 systemd[1]: Starting sysroot-boot.service... May 8 00:49:32.492904 bash[807]: umount: /sysroot/usr/share/oem: not mounted. May 8 00:49:32.506586 ignition[808]: INFO : Ignition 2.14.0 May 8 00:49:32.506586 ignition[808]: INFO : Stage: mount May 8 00:49:32.508604 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:49:32.508604 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:49:32.522182 ignition[808]: INFO : mount: mount passed May 8 00:49:32.523278 ignition[808]: INFO : Ignition finished successfully May 8 00:49:32.524752 systemd[1]: Finished ignition-mount.service. May 8 00:49:32.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.530865 systemd[1]: Finished sysroot-boot.service. May 8 00:49:32.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:32.609080 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:49:32.633000 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) May 8 00:49:32.633086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:49:32.633099 kernel: BTRFS info (device vda6): using free space tree May 8 00:49:32.633850 kernel: BTRFS info (device vda6): has skinny extents May 8 00:49:32.638795 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 00:49:32.640674 systemd[1]: Starting ignition-files.service... 
May 8 00:49:32.733224 ignition[837]: INFO : Ignition 2.14.0
May 8 00:49:32.733224 ignition[837]: INFO : Stage: files
May 8 00:49:32.750045 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:49:32.750045 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:32.750045 ignition[837]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:49:32.750045 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:49:32.750045 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:49:32.757805 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:49:32.757805 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:49:32.757805 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:49:32.757805 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:49:32.757805 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:49:32.757805 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:49:32.757805 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:49:32.753255 unknown[837]: wrote ssh authorized keys file for user: core
May 8 00:49:32.791673 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:49:33.033582 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:49:33.035634 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:49:33.035634 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:49:33.393335 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 8 00:49:33.548257 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:49:33.548257 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:49:33.642823 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:49:33.721304 systemd-networkd[714]: eth0: Gained IPv6LL
May 8 00:49:33.977269 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 8 00:49:34.716511 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:49:34.716511 ignition[837]: INFO : files: op(d): [started] processing unit "containerd.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:49:34.745926 ignition[837]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:49:34.923076 ignition[837]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:49:35.022942 ignition[837]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:49:35.022942 ignition[837]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:49:35.022942 ignition[837]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:49:35.022942 ignition[837]: INFO : files: files passed
May 8 00:49:35.022942 ignition[837]: INFO : Ignition finished successfully
May 8 00:49:35.030461 systemd[1]: Finished ignition-files.service.
May 8 00:49:35.036016 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 8 00:49:35.036038 kernel: audit: type=1130 audit(1746665375.030:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.036227 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 8 00:49:35.038205 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 8 00:49:35.039206 systemd[1]: Starting ignition-quench.service...
May 8 00:49:35.042955 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:49:35.043046 systemd[1]: Finished ignition-quench.service.
May 8 00:49:35.187607 kernel: audit: type=1130 audit(1746665375.043:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.187650 kernel: audit: type=1131 audit(1746665375.043:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.191550 initrd-setup-root-after-ignition[862]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 8 00:49:35.195044 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:49:35.197713 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 8 00:49:35.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.198564 systemd[1]: Reached target ignition-complete.target.
May 8 00:49:35.205152 kernel: audit: type=1130 audit(1746665375.198:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.203905 systemd[1]: Starting initrd-parse-etc.service...
May 8 00:49:35.223701 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:49:35.223807 systemd[1]: Finished initrd-parse-etc.service.
May 8 00:49:35.364187 kernel: audit: type=1130 audit(1746665375.356:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.364226 kernel: audit: type=1131 audit(1746665375.356:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.356663 systemd[1]: Reached target initrd-fs.target.
May 8 00:49:35.364715 systemd[1]: Reached target initrd.target.
May 8 00:49:35.365082 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 8 00:49:35.366425 systemd[1]: Starting dracut-pre-pivot.service...
May 8 00:49:35.378330 systemd[1]: Finished dracut-pre-pivot.service.
May 8 00:49:35.383197 kernel: audit: type=1130 audit(1746665375.378:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.383194 systemd[1]: Starting initrd-cleanup.service...
May 8 00:49:35.393267 systemd[1]: Stopped target nss-lookup.target.
May 8 00:49:35.393671 systemd[1]: Stopped target remote-cryptsetup.target.
May 8 00:49:35.396889 systemd[1]: Stopped target timers.target.
May 8 00:49:35.397467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:49:35.509566 kernel: audit: type=1131 audit(1746665375.398:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.397601 systemd[1]: Stopped dracut-pre-pivot.service.
May 8 00:49:35.399264 systemd[1]: Stopped target initrd.target.
May 8 00:49:35.510223 systemd[1]: Stopped target basic.target.
May 8 00:49:35.510820 systemd[1]: Stopped target ignition-complete.target.
May 8 00:49:35.512497 systemd[1]: Stopped target ignition-diskful.target.
May 8 00:49:35.512804 systemd[1]: Stopped target initrd-root-device.target.
May 8 00:49:35.513186 systemd[1]: Stopped target remote-fs.target.
May 8 00:49:35.517133 systemd[1]: Stopped target remote-fs-pre.target.
May 8 00:49:35.517740 systemd[1]: Stopped target sysinit.target.
May 8 00:49:35.518063 systemd[1]: Stopped target local-fs.target.
May 8 00:49:35.610600 kernel: audit: type=1131 audit(1746665375.606:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.518404 systemd[1]: Stopped target local-fs-pre.target.
May 8 00:49:35.521920 systemd[1]: Stopped target swap.target.
May 8 00:49:35.616762 kernel: audit: type=1131 audit(1746665375.612:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.522399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:49:35.522517 systemd[1]: Stopped dracut-pre-mount.service.
May 8 00:49:35.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.606859 systemd[1]: Stopped target cryptsetup.target.
May 8 00:49:35.611016 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:49:35.611173 systemd[1]: Stopped dracut-initqueue.service.
May 8 00:49:35.612765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:49:35.612851 systemd[1]: Stopped ignition-fetch-offline.service.
May 8 00:49:35.618928 systemd[1]: Stopped target paths.target.
May 8 00:49:35.620356 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:49:35.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.622164 systemd[1]: Stopped systemd-ask-password-console.path.
May 8 00:49:35.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.622811 systemd[1]: Stopped target slices.target.
May 8 00:49:35.635835 iscsid[719]: iscsid shutting down.
May 8 00:49:35.623162 systemd[1]: Stopped target sockets.target.
May 8 00:49:35.627009 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:49:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.641856 ignition[877]: INFO : Ignition 2.14.0
May 8 00:49:35.641856 ignition[877]: INFO : Stage: umount
May 8 00:49:35.641856 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:49:35.641856 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:35.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.627131 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 8 00:49:35.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.733243 ignition[877]: INFO : umount: umount passed
May 8 00:49:35.733243 ignition[877]: INFO : Ignition finished successfully
May 8 00:49:35.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.628378 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:49:35.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.628472 systemd[1]: Stopped ignition-files.service.
May 8 00:49:35.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.631928 systemd[1]: Stopping ignition-mount.service...
May 8 00:49:35.633346 systemd[1]: Stopping iscsid.service...
May 8 00:49:35.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.635732 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:49:35.635910 systemd[1]: Stopped kmod-static-nodes.service.
May 8 00:49:35.638291 systemd[1]: Stopping sysroot-boot.service...
May 8 00:49:35.639442 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:49:35.639611 systemd[1]: Stopped systemd-udev-trigger.service.
May 8 00:49:35.641932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:49:35.642137 systemd[1]: Stopped dracut-pre-trigger.service.
May 8 00:49:35.646432 systemd[1]: iscsid.service: Deactivated successfully.
May 8 00:49:35.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.646529 systemd[1]: Stopped iscsid.service.
May 8 00:49:35.731737 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:49:35.731845 systemd[1]: Stopped ignition-mount.service.
May 8 00:49:35.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.733542 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:49:35.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.733612 systemd[1]: Closed iscsid.socket.
May 8 00:49:35.734753 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:49:35.734820 systemd[1]: Stopped ignition-disks.service.
May 8 00:49:35.736602 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:49:35.736634 systemd[1]: Stopped ignition-kargs.service.
May 8 00:49:35.737544 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:49:35.737587 systemd[1]: Stopped ignition-setup.service.
May 8 00:49:35.739719 systemd[1]: Stopping iscsiuio.service...
May 8 00:49:35.740701 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:49:35.740808 systemd[1]: Finished initrd-cleanup.service.
May 8 00:49:35.743395 systemd[1]: iscsiuio.service: Deactivated successfully.
May 8 00:49:35.743507 systemd[1]: Stopped iscsiuio.service.
May 8 00:49:35.745202 systemd[1]: Stopped target network.target.
May 8 00:49:35.747056 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:49:35.747104 systemd[1]: Closed iscsiuio.socket.
May 8 00:49:35.748820 systemd[1]: Stopping systemd-networkd.service...
May 8 00:49:35.841718 systemd[1]: Stopping systemd-resolved.service...
May 8 00:49:35.844199 systemd-networkd[714]: eth0: DHCPv6 lease lost
May 8 00:49:35.994000 audit: BPF prog-id=9 op=UNLOAD
May 8 00:49:35.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.846233 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:49:35.846373 systemd[1]: Stopped systemd-networkd.service.
May 8 00:49:35.849392 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:49:35.849427 systemd[1]: Closed systemd-networkd.socket.
May 8 00:49:35.851731 systemd[1]: Stopping network-cleanup.service...
May 8 00:49:35.852891 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:49:35.852937 systemd[1]: Stopped parse-ip-for-networkd.service.
May 8 00:49:35.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:35.853519 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:49:35.853556 systemd[1]: Stopped systemd-sysctl.service.
May 8 00:49:35.854794 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:49:35.854831 systemd[1]: Stopped systemd-modules-load.service.
May 8 00:49:35.856966 systemd[1]: Stopping systemd-udevd.service...
May 8 00:49:35.931937 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:49:35.993731 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:49:35.994666 systemd[1]: Stopped systemd-resolved.service.
May 8 00:49:35.998528 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:49:35.998665 systemd[1]: Stopped systemd-udevd.service.
May 8 00:49:36.065024 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:49:36.066719 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:49:36.067860 systemd[1]: Stopped network-cleanup.service.
May 8 00:49:36.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.069000 audit: BPF prog-id=6 op=UNLOAD
May 8 00:49:36.069958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:49:36.069994 systemd[1]: Closed systemd-udevd-control.socket.
May 8 00:49:36.072037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:49:36.072068 systemd[1]: Closed systemd-udevd-kernel.socket.
May 8 00:49:36.073729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:49:36.073773 systemd[1]: Stopped dracut-pre-udev.service.
May 8 00:49:36.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.077899 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:49:36.077942 systemd[1]: Stopped dracut-cmdline.service.
May 8 00:49:36.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.080460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:49:36.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.080545 systemd[1]: Stopped dracut-cmdline-ask.service.
May 8 00:49:36.082545 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 8 00:49:36.083026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:49:36.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.083085 systemd[1]: Stopped systemd-vconsole-setup.service.
May 8 00:49:36.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.085556 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:49:36.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.085662 systemd[1]: Stopped sysroot-boot.service.
May 8 00:49:36.166517 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:49:36.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:36.166604 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 8 00:49:36.167874 systemd[1]: Reached target initrd-switch-root.target.
May 8 00:49:36.169662 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:49:36.169720 systemd[1]: Stopped initrd-setup-root.service.
May 8 00:49:36.172411 systemd[1]: Starting initrd-switch-root.service...
May 8 00:49:36.233876 systemd[1]: Switching root.
May 8 00:49:36.236000 audit: BPF prog-id=5 op=UNLOAD
May 8 00:49:36.236000 audit: BPF prog-id=4 op=UNLOAD
May 8 00:49:36.236000 audit: BPF prog-id=3 op=UNLOAD
May 8 00:49:36.239000 audit: BPF prog-id=8 op=UNLOAD
May 8 00:49:36.239000 audit: BPF prog-id=7 op=UNLOAD
May 8 00:49:36.321203 systemd-journald[197]: Journal stopped
May 8 00:49:41.261432 systemd-journald[197]: Received SIGTERM from PID 1 (n/a).
May 8 00:49:41.261483 kernel: SELinux: Class mctp_socket not defined in policy.
May 8 00:49:41.261496 kernel: SELinux: Class anon_inode not defined in policy.
May 8 00:49:41.261511 kernel: SELinux: the above unknown classes and permissions will be allowed
May 8 00:49:41.261521 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:49:41.261534 kernel: SELinux: policy capability open_perms=1
May 8 00:49:41.261546 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:49:41.261559 kernel: SELinux: policy capability always_check_network=0
May 8 00:49:41.261568 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:49:41.261578 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:49:41.261588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:49:41.261597 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:49:41.261611 systemd[1]: Successfully loaded SELinux policy in 63.506ms.
May 8 00:49:41.261631 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.931ms.
May 8 00:49:41.261646 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:49:41.261657 systemd[1]: Detected virtualization kvm.
May 8 00:49:41.261670 systemd[1]: Detected architecture x86-64.
May 8 00:49:41.261680 systemd[1]: Detected first boot.
May 8 00:49:41.261697 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:49:41.261713 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 8 00:49:41.261723 systemd[1]: Populated /etc with preset unit settings.
May 8 00:49:41.261734 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 8 00:49:41.261746 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 8 00:49:41.261758 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:49:41.261769 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:49:41.261779 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 8 00:49:41.261789 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 8 00:49:41.261805 systemd[1]: Created slice system-addon\x2drun.slice.
May 8 00:49:41.261820 systemd[1]: Created slice system-getty.slice.
May 8 00:49:41.261833 systemd[1]: Created slice system-modprobe.slice.
May 8 00:49:41.261843 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 8 00:49:41.261854 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 8 00:49:41.261864 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 8 00:49:41.261874 systemd[1]: Created slice user.slice.
May 8 00:49:41.261889 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:49:41.261901 systemd[1]: Started systemd-ask-password-wall.path.
May 8 00:49:41.261913 systemd[1]: Set up automount boot.automount.
May 8 00:49:41.261923 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:49:41.261933 systemd[1]: Reached target integritysetup.target. May 8 00:49:41.261943 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:49:41.261954 systemd[1]: Reached target remote-fs.target. May 8 00:49:41.261964 systemd[1]: Reached target slices.target. May 8 00:49:41.261974 systemd[1]: Reached target swap.target. May 8 00:49:41.261984 systemd[1]: Reached target torcx.target. May 8 00:49:41.261999 systemd[1]: Reached target veritysetup.target. May 8 00:49:41.262009 systemd[1]: Listening on systemd-coredump.socket. May 8 00:49:41.262019 systemd[1]: Listening on systemd-initctl.socket. May 8 00:49:41.262029 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:49:41.262040 kernel: kauditd_printk_skb: 47 callbacks suppressed May 8 00:49:41.262050 kernel: audit: type=1400 audit(1746665381.139:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:49:41.262060 kernel: audit: type=1335 audit(1746665381.139:85): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 8 00:49:41.262070 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:49:41.262080 systemd[1]: Listening on systemd-journald.socket. May 8 00:49:41.262095 systemd[1]: Listening on systemd-networkd.socket. May 8 00:49:41.262105 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:49:41.262138 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:49:41.262152 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:49:41.262162 systemd[1]: Mounting dev-hugepages.mount... May 8 00:49:41.262173 systemd[1]: Mounting dev-mqueue.mount... May 8 00:49:41.262183 systemd[1]: Mounting media.mount... 
May 8 00:49:41.262193 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:41.262203 systemd[1]: Mounting sys-kernel-debug.mount... May 8 00:49:41.262215 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:49:41.262226 systemd[1]: Mounting tmp.mount... May 8 00:49:41.262243 systemd[1]: Starting flatcar-tmpfiles.service... May 8 00:49:41.262273 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:41.262283 systemd[1]: Starting kmod-static-nodes.service... May 8 00:49:41.262293 systemd[1]: Starting modprobe@configfs.service... May 8 00:49:41.262305 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:41.262315 systemd[1]: Starting modprobe@drm.service... May 8 00:49:41.262325 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:41.262337 systemd[1]: Starting modprobe@fuse.service... May 8 00:49:41.262348 systemd[1]: Starting modprobe@loop.service... May 8 00:49:41.262358 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:49:41.262368 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 8 00:49:41.262379 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 8 00:49:41.262389 systemd[1]: Starting systemd-journald.service... May 8 00:49:41.262417 kernel: fuse: init (API version 7.34) May 8 00:49:41.262427 systemd[1]: Starting systemd-modules-load.service... May 8 00:49:41.262442 systemd[1]: Starting systemd-network-generator.service... May 8 00:49:41.262453 systemd[1]: Starting systemd-remount-fs.service... May 8 00:49:41.262465 kernel: loop: module loaded May 8 00:49:41.262486 systemd[1]: Starting systemd-udev-trigger.service... 
May 8 00:49:41.262497 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:41.262507 systemd[1]: Mounted dev-hugepages.mount. May 8 00:49:41.262517 systemd[1]: Mounted dev-mqueue.mount. May 8 00:49:41.262527 systemd[1]: Mounted media.mount. May 8 00:49:41.262537 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:49:41.262547 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:49:41.262559 systemd[1]: Mounted tmp.mount. May 8 00:49:41.262574 kernel: audit: type=1305 audit(1746665381.260:86): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:49:41.262588 systemd-journald[1026]: Journal started May 8 00:49:41.262647 systemd-journald[1026]: Runtime Journal (/run/log/journal/22ca9d59ebe74ba9b304bb86fe8f3aac) is 6.0M, max 48.4M, 42.4M free. May 8 00:49:41.139000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:49:41.139000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 8 00:49:41.260000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:49:41.260000 audit[1026]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc8c757150 a2=4000 a3=7ffc8c7571ec items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:41.268040 kernel: audit: type=1300 audit(1746665381.260:86): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc8c757150 a2=4000 a3=7ffc8c7571ec items=0 ppid=1 
pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:41.260000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:49:41.271315 kernel: audit: type=1327 audit(1746665381.260:86): proctitle="/usr/lib/systemd/systemd-journald" May 8 00:49:41.271458 systemd[1]: Started systemd-journald.service. May 8 00:49:41.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.273567 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:49:41.277444 kernel: audit: type=1130 audit(1746665381.272:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.277554 kernel: audit: type=1130 audit(1746665381.277:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.277951 systemd[1]: Finished kmod-static-nodes.service. May 8 00:49:41.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.283180 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 8 00:49:41.283533 systemd[1]: Finished modprobe@configfs.service. May 8 00:49:41.287623 kernel: audit: type=1130 audit(1746665381.282:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.288242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:41.288567 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:41.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.294888 kernel: audit: type=1130 audit(1746665381.287:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.295005 kernel: audit: type=1131 audit(1746665381.287:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:41.296441 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:49:41.296690 systemd[1]: Finished modprobe@drm.service. May 8 00:49:41.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.298086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:41.298330 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:41.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.299763 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:49:41.300021 systemd[1]: Finished modprobe@fuse.service. May 8 00:49:41.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.301372 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 8 00:49:41.301602 systemd[1]: Finished modprobe@loop.service. May 8 00:49:41.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.303242 systemd[1]: Finished systemd-modules-load.service. May 8 00:49:41.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.304760 systemd[1]: Finished systemd-network-generator.service. May 8 00:49:41.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.306493 systemd[1]: Finished systemd-remount-fs.service. May 8 00:49:41.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.307932 systemd[1]: Reached target network-pre.target. May 8 00:49:41.310451 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:49:41.312767 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:49:41.314126 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:49:41.316190 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:49:41.318624 systemd[1]: Starting systemd-journal-flush.service... 
May 8 00:49:41.319726 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:41.320862 systemd[1]: Starting systemd-random-seed.service... May 8 00:49:41.322017 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:41.323625 systemd[1]: Starting systemd-sysctl.service... May 8 00:49:41.325388 systemd-journald[1026]: Time spent on flushing to /var/log/journal/22ca9d59ebe74ba9b304bb86fe8f3aac is 23.126ms for 1115 entries. May 8 00:49:41.325388 systemd-journald[1026]: System Journal (/var/log/journal/22ca9d59ebe74ba9b304bb86fe8f3aac) is 8.0M, max 195.6M, 187.6M free. May 8 00:49:41.366574 systemd-journald[1026]: Received client request to flush runtime journal. May 8 00:49:41.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.326144 systemd[1]: Starting systemd-sysusers.service... May 8 00:49:41.332519 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:49:41.333948 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 00:49:41.335274 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:49:41.338323 systemd[1]: Starting systemd-udev-settle.service... May 8 00:49:41.340681 systemd[1]: Finished systemd-random-seed.service. May 8 00:49:41.341793 systemd[1]: Reached target first-boot-complete.target. May 8 00:49:41.368864 systemd[1]: Finished systemd-journal-flush.service. 
May 8 00:49:41.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.386471 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:49:41.386578 systemd[1]: Finished systemd-sysusers.service. May 8 00:49:41.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.389631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:49:41.400264 systemd[1]: Finished systemd-sysctl.service. May 8 00:49:41.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:41.420328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:49:41.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.418345 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:49:42.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.430950 systemd[1]: Starting systemd-udevd.service... May 8 00:49:42.448449 systemd-udevd[1070]: Using default interface naming scheme 'v252'. May 8 00:49:42.461641 systemd[1]: Started systemd-udevd.service. 
May 8 00:49:42.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.525307 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:49:42.525575 systemd[1]: Found device dev-ttyS0.device. May 8 00:49:42.531259 kernel: ACPI: button: Power Button [PWRF] May 8 00:49:42.536000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 00:49:42.542144 systemd[1]: Starting systemd-networkd.service... May 8 00:49:42.536000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557ccc759720 a1=338ac a2=7f66f25fbbc5 a3=5 items=110 ppid=1070 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:42.536000 audit: CWD cwd="/" May 8 00:49:42.536000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=1 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=2 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=3 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=4 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=5 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=6 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=7 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=8 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=9 name=(null) inode=14568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=10 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=11 name=(null) inode=14569 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=12 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 
audit: PATH item=13 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=14 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=15 name=(null) inode=14571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=16 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=17 name=(null) inode=14572 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=18 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=19 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=20 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=21 name=(null) inode=14574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=22 name=(null) inode=14573 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=23 name=(null) inode=14575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=24 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=25 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=26 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=27 name=(null) inode=14577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=28 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=29 name=(null) inode=14578 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=30 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=31 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=32 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=33 name=(null) inode=14580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=34 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=35 name=(null) inode=14581 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=36 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=37 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=38 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=39 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=40 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=41 name=(null) inode=14584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=42 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=43 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=44 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=45 name=(null) inode=14586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=46 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=47 name=(null) inode=14587 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=48 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=49 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 8 00:49:42.536000 audit: PATH item=50 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=51 name=(null) inode=14589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=52 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=53 name=(null) inode=14590 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=55 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=56 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=57 name=(null) inode=14592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=58 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=59 name=(null) 
inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=60 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=61 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=62 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=63 name=(null) inode=14595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=64 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=65 name=(null) inode=14596 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=66 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=67 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=68 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=69 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=70 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=71 name=(null) inode=14599 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=72 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=73 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=74 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=75 name=(null) inode=14601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=76 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=77 name=(null) inode=14602 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=78 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=79 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=80 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=81 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=82 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=83 name=(null) inode=14605 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=84 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=85 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=86 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=87 name=(null) inode=14607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=88 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=89 name=(null) inode=14608 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=90 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=91 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=92 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=93 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=94 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=95 name=(null) inode=14611 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 
audit: PATH item=96 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=97 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=98 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=99 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=100 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=101 name=(null) inode=14614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=102 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=103 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=104 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=105 name=(null) inode=14616 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=106 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=107 name=(null) inode=14617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PATH item=109 name=(null) inode=12151 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:42.536000 audit: PROCTITLE proctitle="(udev-worker)" May 8 00:49:42.557171 systemd[1]: Starting systemd-userdbd.service... May 8 00:49:42.563078 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:49:42.583898 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:49:42.584092 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:49:42.584285 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:49:42.574693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:49:42.601173 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:49:42.605141 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:49:42.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:42.619039 systemd[1]: Started systemd-userdbd.service. May 8 00:49:42.734486 kernel: kvm: Nested Virtualization enabled May 8 00:49:42.734622 kernel: SVM: kvm: Nested Paging enabled May 8 00:49:42.734639 kernel: SVM: Virtual VMLOAD VMSAVE supported May 8 00:49:42.736132 kernel: SVM: Virtual GIF supported May 8 00:49:42.746331 systemd-networkd[1091]: lo: Link UP May 8 00:49:42.746342 systemd-networkd[1091]: lo: Gained carrier May 8 00:49:42.746816 systemd-networkd[1091]: Enumeration completed May 8 00:49:42.746948 systemd[1]: Started systemd-networkd.service. May 8 00:49:42.747592 systemd-networkd[1091]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:49:42.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.749034 systemd-networkd[1091]: eth0: Link UP May 8 00:49:42.749044 systemd-networkd[1091]: eth0: Gained carrier May 8 00:49:42.754156 kernel: EDAC MC: Ver: 3.0.0 May 8 00:49:42.761242 systemd-networkd[1091]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:49:42.778626 systemd[1]: Finished systemd-udev-settle.service. May 8 00:49:42.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.781432 systemd[1]: Starting lvm2-activation-early.service... May 8 00:49:42.792527 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:49:42.830642 systemd[1]: Finished lvm2-activation-early.service. 
May 8 00:49:42.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.832032 systemd[1]: Reached target cryptsetup.target. May 8 00:49:42.834802 systemd[1]: Starting lvm2-activation.service... May 8 00:49:42.840310 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:49:42.869595 systemd[1]: Finished lvm2-activation.service. May 8 00:49:42.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.870863 systemd[1]: Reached target local-fs-pre.target. May 8 00:49:42.871887 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:49:42.871919 systemd[1]: Reached target local-fs.target. May 8 00:49:42.872832 systemd[1]: Reached target machines.target. May 8 00:49:42.875479 systemd[1]: Starting ldconfig.service... May 8 00:49:42.876819 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:42.876884 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:42.878041 systemd[1]: Starting systemd-boot-update.service... May 8 00:49:42.880406 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:49:42.883044 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:49:42.885881 systemd[1]: Starting systemd-sysext.service... 
May 8 00:49:42.887599 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) May 8 00:49:42.888765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:49:42.894593 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:49:42.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.905486 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:49:42.910273 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:49:42.910612 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:49:42.926154 kernel: loop0: detected capacity change from 0 to 210664 May 8 00:49:42.944771 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) May 8 00:49:42.944771 systemd-fsck[1123]: /dev/vda1: 791 files, 120730/258078 clusters May 8 00:49:42.946344 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:49:42.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:42.950330 systemd[1]: Mounting boot.mount... May 8 00:49:43.764050 systemd[1]: Mounted boot.mount. May 8 00:49:43.861144 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:49:43.867684 systemd[1]: Finished systemd-boot-update.service. May 8 00:49:43.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:43.875828 kernel: loop1: detected capacity change from 0 to 210664 May 8 00:49:43.880739 (sd-sysext)[1133]: Using extensions 'kubernetes'. May 8 00:49:43.881188 (sd-sysext)[1133]: Merged extensions into '/usr'. May 8 00:49:43.916966 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:43.918552 systemd[1]: Mounting usr-share-oem.mount... May 8 00:49:43.919659 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:43.920773 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:43.922777 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:43.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.924963 systemd[1]: Starting modprobe@loop.service... May 8 00:49:43.925965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 8 00:49:43.926154 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:43.926329 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:43.927474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:43.927713 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:43.932052 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:43.932244 systemd[1]: Finished modprobe@loop.service. May 8 00:49:43.935958 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:43.943080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:43.945046 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:43.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.949870 systemd[1]: Mounted usr-share-oem.mount. May 8 00:49:43.951759 systemd[1]: Finished systemd-sysext.service. May 8 00:49:43.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.954237 systemd[1]: Starting ensure-sysext.service... 
May 8 00:49:43.955170 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:43.956194 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:49:43.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:43.962151 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:49:43.962808 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:49:43.965955 systemd[1]: Reloading. May 8 00:49:43.969427 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:49:43.970656 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:49:43.972239 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:49:43.984100 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:49:44.020518 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-05-08T00:49:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:49:44.020546 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-05-08T00:49:44Z" level=info msg="torcx already run" May 8 00:49:44.099751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 8 00:49:44.099769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:49:44.122854 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:49:44.194239 systemd[1]: Finished ldconfig.service. May 8 00:49:44.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.197599 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:49:44.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.202203 systemd[1]: Starting audit-rules.service... May 8 00:49:44.204936 systemd[1]: Starting clean-ca-certificates.service... May 8 00:49:44.208060 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:49:44.211795 systemd[1]: Starting systemd-resolved.service... May 8 00:49:44.218025 systemd[1]: Starting systemd-timesyncd.service... May 8 00:49:44.221019 systemd[1]: Starting systemd-update-utmp.service... May 8 00:49:44.223188 systemd[1]: Finished clean-ca-certificates.service. May 8 00:49:44.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.227906 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 8 00:49:44.228000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:49:44.233748 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.234160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:44.236908 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:44.253675 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:44.257146 systemd[1]: Starting modprobe@loop.service... May 8 00:49:44.258240 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:44.258479 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:44.258701 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:44.258882 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.261860 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 00:49:44.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.263740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:44.263929 systemd[1]: Finished modprobe@dm_mod.service. 
May 8 00:49:44.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.265697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:44.265863 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:44.267714 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:44.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.267918 systemd[1]: Finished modprobe@loop.service. May 8 00:49:44.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.269477 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 8 00:49:44.269627 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:44.271911 systemd[1]: Starting systemd-update-done.service... May 8 00:49:44.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.273788 systemd[1]: Finished systemd-update-utmp.service. May 8 00:49:44.277810 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.278081 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:44.279769 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:44.282221 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:44.285168 systemd[1]: Starting modprobe@loop.service... May 8 00:49:44.288872 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:44.289064 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:44.289305 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:44.289422 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.291084 systemd[1]: Finished systemd-update-done.service. 
May 8 00:49:44.292000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:49:44.292000 audit[1247]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd0191c440 a2=420 a3=0 items=0 ppid=1218 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:44.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:44.292000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:49:44.292581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:44.293811 augenrules[1247]: No rules May 8 00:49:44.292763 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:44.294848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:44.295050 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:44.296550 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:44.296753 systemd[1]: Finished modprobe@loop.service. May 8 00:49:44.298324 systemd[1]: Finished audit-rules.service. May 8 00:49:44.299805 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:44.299927 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:44.303673 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.303985 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 8 00:49:44.306006 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:44.308625 systemd[1]: Starting modprobe@drm.service... May 8 00:49:44.311733 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:44.314767 systemd[1]: Starting modprobe@loop.service... May 8 00:49:44.316233 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:44.316622 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:44.319663 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 00:49:44.321334 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:44.321549 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:49:44.324262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:44.324740 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:44.326391 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:49:44.326757 systemd[1]: Finished modprobe@drm.service. May 8 00:49:44.328268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:44.328641 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:44.330089 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:44.330420 systemd[1]: Finished modprobe@loop.service. May 8 00:49:44.334141 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:44.334244 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:44.336612 systemd[1]: Finished ensure-sysext.service. 
May 8 00:49:44.349532 systemd[1]: Started systemd-timesyncd.service. May 8 00:49:45.188094 systemd-timesyncd[1230]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:49:45.188172 systemd-timesyncd[1230]: Initial clock synchronization to Thu 2025-05-08 00:49:45.187950 UTC. May 8 00:49:45.188519 systemd[1]: Reached target time-set.target. May 8 00:49:45.200495 systemd-resolved[1226]: Positive Trust Anchors: May 8 00:49:45.200511 systemd-resolved[1226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:49:45.200538 systemd-resolved[1226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:49:45.211457 systemd-resolved[1226]: Defaulting to hostname 'linux'. May 8 00:49:45.213179 systemd[1]: Started systemd-resolved.service. May 8 00:49:45.215976 systemd[1]: Reached target network.target. May 8 00:49:45.216805 systemd[1]: Reached target nss-lookup.target. May 8 00:49:45.233151 systemd[1]: Reached target sysinit.target. May 8 00:49:45.234183 systemd[1]: Started motdgen.path. May 8 00:49:45.234964 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:49:45.236489 systemd[1]: Started logrotate.timer. May 8 00:49:45.237456 systemd[1]: Started mdadm.timer. May 8 00:49:45.238209 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:49:45.239165 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:49:45.239201 systemd[1]: Reached target paths.target. 
May 8 00:49:45.240051 systemd[1]: Reached target timers.target. May 8 00:49:45.241420 systemd[1]: Listening on dbus.socket. May 8 00:49:45.244173 systemd[1]: Starting docker.socket... May 8 00:49:45.246047 systemd[1]: Listening on sshd.socket. May 8 00:49:45.247051 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:45.247451 systemd[1]: Listening on docker.socket. May 8 00:49:45.248398 systemd[1]: Reached target sockets.target. May 8 00:49:45.249276 systemd[1]: Reached target basic.target. May 8 00:49:45.250358 systemd[1]: System is tainted: cgroupsv1 May 8 00:49:45.250413 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:49:45.250449 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:49:45.251929 systemd[1]: Starting containerd.service... May 8 00:49:45.253961 systemd[1]: Starting dbus.service... May 8 00:49:45.256050 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:49:45.258197 systemd[1]: Starting extend-filesystems.service... May 8 00:49:45.259217 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:49:45.260772 systemd[1]: Starting motdgen.service... May 8 00:49:45.262622 systemd[1]: Starting prepare-helm.service... May 8 00:49:45.265276 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:49:45.266824 jq[1281]: false May 8 00:49:45.268002 systemd[1]: Starting sshd-keygen.service... May 8 00:49:45.271855 systemd[1]: Starting systemd-logind.service... 
May 8 00:49:45.272844 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:45.272947 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:49:45.274453 systemd[1]: Starting update-engine.service... May 8 00:49:45.277069 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:49:45.284041 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:49:45.286722 jq[1295]: true May 8 00:49:45.284497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:49:45.285731 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:49:45.286024 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:49:45.299470 extend-filesystems[1282]: Found loop1 May 8 00:49:45.299470 extend-filesystems[1282]: Found sr0 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda May 8 00:49:45.299470 extend-filesystems[1282]: Found vda1 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda2 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda3 May 8 00:49:45.299470 extend-filesystems[1282]: Found usr May 8 00:49:45.299470 extend-filesystems[1282]: Found vda4 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda6 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda7 May 8 00:49:45.299470 extend-filesystems[1282]: Found vda9 May 8 00:49:45.299470 extend-filesystems[1282]: Checking size of /dev/vda9 May 8 00:49:45.308794 systemd[1]: motdgen.service: Deactivated successfully. 
May 8 00:49:45.408967 update_engine[1292]: I0508 00:49:45.395568  1292 main.cc:92] Flatcar Update Engine starting May 8 00:49:45.409242 extend-filesystems[1282]: Resized partition /dev/vda9 May 8 00:49:45.411496 tar[1305]: linux-amd64/helm May 8 00:49:45.371208 dbus-daemon[1280]: [system] SELinux support is enabled May 8 00:49:45.412093 jq[1307]: true May 8 00:49:45.309100 systemd[1]: Finished motdgen.service. May 8 00:49:45.412224 extend-filesystems[1324]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:49:45.318977 systemd-networkd[1091]: eth0: Gained IPv6LL May 8 00:49:45.437938 update_engine[1292]: I0508 00:49:45.432132  1292 update_check_scheduler.cc:74] Next update check in 9m32s May 8 00:49:45.381214 systemd[1]: Started dbus.service. May 8 00:49:45.384561 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:49:45.388923 systemd[1]: Reached target network-online.target. May 8 00:49:45.391382 systemd[1]: Starting kubelet.service... May 8 00:49:45.393333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:49:45.393440 systemd[1]: Reached target system-config.target. May 8 00:49:45.394081 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:49:45.394130 systemd[1]: Reached target user-config.target. May 8 00:49:45.432095 systemd[1]: Started update-engine.service. May 8 00:49:45.436012 systemd[1]: Started locksmithd.service. May 8 00:49:45.446007 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:49:45.484907 env[1309]: time="2025-05-08T00:49:45.484699197Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:49:45.518109 env[1309]: time="2025-05-08T00:49:45.518012864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:49:45.518988 env[1309]: time="2025-05-08T00:49:45.518956444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:45.521887 env[1309]: time="2025-05-08T00:49:45.521811850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:45.521887 env[1309]: time="2025-05-08T00:49:45.521878805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:45.522537 env[1309]: time="2025-05-08T00:49:45.522477488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:45.522537 env[1309]: time="2025-05-08T00:49:45.522521310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:49:45.522707 env[1309]: time="2025-05-08T00:49:45.522541799Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:49:45.522707 env[1309]: time="2025-05-08T00:49:45.522556096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:49:45.522828 env[1309]: time="2025-05-08T00:49:45.522754308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:45.523279 env[1309]: time="2025-05-08T00:49:45.523194233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:49:45.523566 env[1309]: time="2025-05-08T00:49:45.523521647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:45.523566 env[1309]: time="2025-05-08T00:49:45.523550110Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:49:45.523676 env[1309]: time="2025-05-08T00:49:45.523619440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 00:49:45.523676 env[1309]: time="2025-05-08T00:49:45.523640891Z" level=info msg="metadata content store policy set" policy=shared May 8 00:49:45.593637 systemd-logind[1291]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:49:45.593672 systemd-logind[1291]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:49:45.595340 systemd-logind[1291]: New seat seat0. May 8 00:49:45.599977 systemd[1]: Started systemd-logind.service. May 8 00:49:45.927007 locksmithd[1340]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:49:45.951315 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:49:47.127828 extend-filesystems[1324]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:49:47.127828 extend-filesystems[1324]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:49:47.127828 extend-filesystems[1324]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:49:47.144924 sshd_keygen[1308]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:49:47.128424 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:49:47.145298 extend-filesystems[1282]: Resized filesystem in /dev/vda9 May 8 00:49:47.128724 systemd[1]: Finished extend-filesystems.service. May 8 00:49:47.157271 systemd[1]: Finished sshd-keygen.service. May 8 00:49:47.160498 systemd[1]: Starting issuegen.service... May 8 00:49:47.168807 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:49:47.169106 systemd[1]: Finished issuegen.service. May 8 00:49:47.230876 systemd[1]: Starting systemd-user-sessions.service... May 8 00:49:47.291610 systemd[1]: Finished systemd-user-sessions.service. May 8 00:49:47.294384 systemd[1]: Started getty@tty1.service. May 8 00:49:47.296623 systemd[1]: Started serial-getty@ttyS0.service. May 8 00:49:47.299577 systemd[1]: Reached target getty.target. May 8 00:49:47.490136 tar[1305]: linux-amd64/LICENSE May 8 00:49:47.490807 tar[1305]: linux-amd64/README.md May 8 00:49:47.496962 systemd[1]: Finished prepare-helm.service. May 8 00:49:47.571562 env[1309]: time="2025-05-08T00:49:47.571451939Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:49:47.571562 env[1309]: time="2025-05-08T00:49:47.571565822Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:49:47.571562 env[1309]: time="2025-05-08T00:49:47.571584798Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571646794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571672913Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571686398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571701166Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571714441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571726714Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571747072Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571760578Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571773081Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.571967325Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:49:47.572047 env[1309]: time="2025-05-08T00:49:47.572079145Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:49:47.572417 env[1309]: time="2025-05-08T00:49:47.572675444Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:49:47.572739 env[1309]: time="2025-05-08T00:49:47.572763078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:49:47.572807 env[1309]: time="2025-05-08T00:49:47.572777946Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:49:47.572927 env[1309]: time="2025-05-08T00:49:47.572892300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:49:47.572927 env[1309]: time="2025-05-08T00:49:47.572920944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.572936133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.572951932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.572965408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.572984063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.572997127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.573009861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573024 env[1309]: time="2025-05-08T00:49:47.573032153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573239151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573254279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573282532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573295477Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573311958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:49:47.573330 env[1309]: time="2025-05-08T00:49:47.573323599Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:49:47.573499 env[1309]: time="2025-05-08T00:49:47.573344769Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:49:47.573499 env[1309]: time="2025-05-08T00:49:47.573401706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:49:47.573717 env[1309]: time="2025-05-08T00:49:47.573657285Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.573734811Z" level=info msg="Connect containerd service" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.573784354Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.574448590Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.574584274Z" level=info msg="Start subscribing containerd event" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.574650769Z" level=info msg="Start recovering state" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575352365Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575373244Z" level=info msg="Start event monitor" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575455148Z" level=info msg="Start snapshots syncer" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575488139Z" level=info msg="Start cni network conf syncer for default" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575475486Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.575505672Z" level=info msg="Start streaming server" May 8 00:49:47.739332 env[1309]: time="2025-05-08T00:49:47.583321142Z" level=info msg="containerd successfully booted in 2.100556s" May 8 00:49:47.575672 systemd[1]: Started containerd.service.
May 8 00:49:48.036499 bash[1343]: Updated "/home/core/.ssh/authorized_keys" May 8 00:49:48.037509 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:49:48.815719 systemd[1]: Started kubelet.service. May 8 00:49:48.834621 systemd[1]: Reached target multi-user.target. May 8 00:49:48.838121 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:49:48.847146 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:49:48.847526 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:49:48.882507 systemd[1]: Startup finished in 8.451s (kernel) + 11.668s (userspace) = 20.120s. May 8 00:49:50.016253 kubelet[1382]: E0508 00:49:50.016168 1382 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:49:50.018226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:49:50.018401 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:49:53.940319 systemd[1]: Created slice system-sshd.slice. May 8 00:49:53.941766 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:42006.service. May 8 00:49:53.982001 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 42006 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:49:53.984136 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:53.993029 systemd-logind[1291]: New session 1 of user core. May 8 00:49:53.993846 systemd[1]: Created slice user-500.slice. May 8 00:49:53.995323 systemd[1]: Starting user-runtime-dir@500.service... May 8 00:49:54.005000 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:49:54.006960 systemd[1]: Starting user@500.service... 
May 8 00:49:54.009996 (systemd)[1398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:54.079800 systemd[1398]: Queued start job for default target default.target. May 8 00:49:54.080000 systemd[1398]: Reached target paths.target. May 8 00:49:54.080016 systemd[1398]: Reached target sockets.target. May 8 00:49:54.080028 systemd[1398]: Reached target timers.target. May 8 00:49:54.080040 systemd[1398]: Reached target basic.target. May 8 00:49:54.080085 systemd[1398]: Reached target default.target. May 8 00:49:54.080106 systemd[1398]: Startup finished in 64ms. May 8 00:49:54.080218 systemd[1]: Started user@500.service. May 8 00:49:54.081183 systemd[1]: Started session-1.scope. May 8 00:49:54.132177 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:42010.service. May 8 00:49:54.164722 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:49:54.166042 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:54.169442 systemd-logind[1291]: New session 2 of user core. May 8 00:49:54.170150 systemd[1]: Started session-2.scope. May 8 00:49:54.223547 sshd[1407]: pam_unix(sshd:session): session closed for user core May 8 00:49:54.225669 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:42026.service. May 8 00:49:54.226083 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:42010.service: Deactivated successfully. May 8 00:49:54.227017 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:49:54.227031 systemd-logind[1291]: Session 2 logged out. Waiting for processes to exit. May 8 00:49:54.227938 systemd-logind[1291]: Removed session 2. 
May 8 00:49:54.257233 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 42026 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:49:54.258358 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:54.261681 systemd-logind[1291]: New session 3 of user core. May 8 00:49:54.262369 systemd[1]: Started session-3.scope. May 8 00:49:54.313109 sshd[1412]: pam_unix(sshd:session): session closed for user core May 8 00:49:54.315974 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:42038.service. May 8 00:49:54.316507 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:42026.service: Deactivated successfully. May 8 00:49:54.317342 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:49:54.317370 systemd-logind[1291]: Session 3 logged out. Waiting for processes to exit. May 8 00:49:54.318342 systemd-logind[1291]: Removed session 3. May 8 00:49:54.346732 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 42038 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:49:54.347792 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:54.351243 systemd-logind[1291]: New session 4 of user core. May 8 00:49:54.352008 systemd[1]: Started session-4.scope. May 8 00:49:54.405786 sshd[1420]: pam_unix(sshd:session): session closed for user core May 8 00:49:54.408192 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:42044.service. May 8 00:49:54.408651 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:42038.service: Deactivated successfully. May 8 00:49:54.409531 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:49:54.409568 systemd-logind[1291]: Session 4 logged out. Waiting for processes to exit. May 8 00:49:54.410536 systemd-logind[1291]: Removed session 4. 
May 8 00:49:54.439205 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 42044 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:49:54.440331 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:54.443573 systemd-logind[1291]: New session 5 of user core. May 8 00:49:54.444322 systemd[1]: Started session-5.scope. May 8 00:49:54.499847 sudo[1432]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:49:54.500066 sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:49:54.539143 systemd[1]: Starting docker.service... May 8 00:49:54.755863 env[1444]: time="2025-05-08T00:49:54.755718099Z" level=info msg="Starting up" May 8 00:49:54.757114 env[1444]: time="2025-05-08T00:49:54.757062220Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:49:54.757114 env[1444]: time="2025-05-08T00:49:54.757087447Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:49:54.757114 env[1444]: time="2025-05-08T00:49:54.757106893Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0  }] }" module=grpc May 8 00:49:54.757114 env[1444]: time="2025-05-08T00:49:54.757117654Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 8 00:49:54.762016 env[1444]: time="2025-05-08T00:49:54.761968903Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:49:54.762016 env[1444]: time="2025-05-08T00:49:54.762005271Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:49:54.762136 env[1444]: time="2025-05-08T00:49:54.762029507Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0  }] }" module=grpc May 8 00:49:54.762136 env[1444]: time="2025-05-08T00:49:54.762060114Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:49:54.770689 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport445207529-merged.mount: Deactivated successfully. May 8 00:49:58.206501 env[1444]: time="2025-05-08T00:49:58.206420951Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 8 00:49:58.206501 env[1444]: time="2025-05-08T00:49:58.206477166Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 8 00:49:58.207001 env[1444]: time="2025-05-08T00:49:58.206710564Z" level=info msg="Loading containers: start." May 8 00:49:58.688281 kernel: Initializing XFRM netlink socket May 8 00:49:58.717700 env[1444]: time="2025-05-08T00:49:58.717645602Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 8 00:49:58.773414 systemd-networkd[1091]: docker0: Link UP May 8 00:49:58.801443 env[1444]: time="2025-05-08T00:49:58.801383851Z" level=info msg="Loading containers: done." May 8 00:49:58.819113 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1708157057-merged.mount: Deactivated successfully. May 8 00:49:58.836628 env[1444]: time="2025-05-08T00:49:58.836526198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:49:58.836839 env[1444]: time="2025-05-08T00:49:58.836814399Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 8 00:49:58.836966 env[1444]: time="2025-05-08T00:49:58.836948851Z" level=info msg="Daemon has completed initialization" May 8 00:49:58.865738 systemd[1]: Started docker.service.
May 8 00:49:58.869809 env[1444]: time="2025-05-08T00:49:58.869748834Z" level=info msg="API listen on /run/docker.sock" May 8 00:49:59.802985 env[1309]: time="2025-05-08T00:49:59.802915418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:50:00.249849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:50:00.250135 systemd[1]: Stopped kubelet.service. May 8 00:50:00.251801 systemd[1]: Starting kubelet.service... May 8 00:50:00.388841 systemd[1]: Started kubelet.service. May 8 00:50:00.722530 kubelet[1590]: E0508 00:50:00.722358 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:00.726917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:00.727111 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:04.379503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051397298.mount: Deactivated successfully. 
May 8 00:50:10.703959 env[1309]: time="2025-05-08T00:50:10.703859493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:10.708650 env[1309]: time="2025-05-08T00:50:10.708558066Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:10.711451 env[1309]: time="2025-05-08T00:50:10.711405858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:10.713444 env[1309]: time="2025-05-08T00:50:10.713412241Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:10.714108 env[1309]: time="2025-05-08T00:50:10.714040390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:50:10.756700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:50:10.756920 systemd[1]: Stopped kubelet.service. May 8 00:50:10.758974 systemd[1]: Starting kubelet.service... May 8 00:50:10.777885 env[1309]: time="2025-05-08T00:50:10.777485004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:50:10.945397 systemd[1]: Started kubelet.service. 
May 8 00:50:11.004316 kubelet[1618]: E0508 00:50:11.004240 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:11.006047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:11.006201 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:14.285469 env[1309]: time="2025-05-08T00:50:14.285383769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:14.414064 env[1309]: time="2025-05-08T00:50:14.413991891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:14.481296 env[1309]: time="2025-05-08T00:50:14.481185207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:14.504611 env[1309]: time="2025-05-08T00:50:14.504531648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:14.505293 env[1309]: time="2025-05-08T00:50:14.505229296Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:50:14.515982 env[1309]: time="2025-05-08T00:50:14.515930349Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:50:18.197209 env[1309]: time="2025-05-08T00:50:18.197136331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:18.199642 env[1309]: time="2025-05-08T00:50:18.199581497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:18.201399 env[1309]: time="2025-05-08T00:50:18.201356430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:18.203568 env[1309]: time="2025-05-08T00:50:18.203521575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:18.204349 env[1309]: time="2025-05-08T00:50:18.204311387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:50:18.214560 env[1309]: time="2025-05-08T00:50:18.214513881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:50:20.196895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284432318.mount: Deactivated successfully. 
May 8 00:50:21.213033 env[1309]: time="2025-05-08T00:50:21.212921597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:21.215726 env[1309]: time="2025-05-08T00:50:21.215677625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:21.217728 env[1309]: time="2025-05-08T00:50:21.217681561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:21.219786 env[1309]: time="2025-05-08T00:50:21.219706949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:21.220349 env[1309]: time="2025-05-08T00:50:21.220305227Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:50:21.240964 env[1309]: time="2025-05-08T00:50:21.240910971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:50:21.249749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 00:50:21.249994 systemd[1]: Stopped kubelet.service. May 8 00:50:21.251731 systemd[1]: Starting kubelet.service... May 8 00:50:21.349359 systemd[1]: Started kubelet.service. 
May 8 00:50:21.673741 kubelet[1653]: E0508 00:50:21.673610 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:21.676388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:21.676664 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:22.638791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261337510.mount: Deactivated successfully. May 8 00:50:26.095490 env[1309]: time="2025-05-08T00:50:26.095415157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:26.205941 env[1309]: time="2025-05-08T00:50:26.205834320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:26.239961 env[1309]: time="2025-05-08T00:50:26.239884468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:26.278917 env[1309]: time="2025-05-08T00:50:26.278843659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:26.279906 env[1309]: time="2025-05-08T00:50:26.279856501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:50:26.292471 env[1309]: time="2025-05-08T00:50:26.292419880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:50:27.808690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214943705.mount: Deactivated successfully. May 8 00:50:27.815652 env[1309]: time="2025-05-08T00:50:27.815573883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:27.818943 env[1309]: time="2025-05-08T00:50:27.818885312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:27.821531 env[1309]: time="2025-05-08T00:50:27.821474194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:27.823090 env[1309]: time="2025-05-08T00:50:27.823042622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:27.823404 env[1309]: time="2025-05-08T00:50:27.823367180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:50:27.840105 env[1309]: time="2025-05-08T00:50:27.840039474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:50:28.402802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538485882.mount: Deactivated successfully. May 8 00:50:30.795347 update_engine[1292]: I0508 00:50:30.795210 1292 update_attempter.cc:509] Updating boot flags... 
May 8 00:50:31.749720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 00:50:31.749906 systemd[1]: Stopped kubelet.service. May 8 00:50:31.751584 systemd[1]: Starting kubelet.service... May 8 00:50:31.839317 systemd[1]: Started kubelet.service. May 8 00:50:31.941867 kubelet[1695]: E0508 00:50:31.941791 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:31.944368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:31.944524 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:33.315470 env[1309]: time="2025-05-08T00:50:33.315391416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:33.318343 env[1309]: time="2025-05-08T00:50:33.318283007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:33.320829 env[1309]: time="2025-05-08T00:50:33.320757859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:33.323027 env[1309]: time="2025-05-08T00:50:33.322929405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:33.323901 env[1309]: time="2025-05-08T00:50:33.323855641Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:50:36.128707 systemd[1]: Stopped kubelet.service. May 8 00:50:36.130869 systemd[1]: Starting kubelet.service... May 8 00:50:36.145245 systemd[1]: Reloading. May 8 00:50:36.216568 /usr/lib/systemd/system-generators/torcx-generator[1808]: time="2025-05-08T00:50:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:36.217024 /usr/lib/systemd/system-generators/torcx-generator[1808]: time="2025-05-08T00:50:36Z" level=info msg="torcx already run" May 8 00:50:36.902583 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:36.902601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:36.920816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:50:37.006003 systemd[1]: Started kubelet.service. May 8 00:50:37.007747 systemd[1]: Stopping kubelet.service... May 8 00:50:37.008031 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:50:37.008343 systemd[1]: Stopped kubelet.service. May 8 00:50:37.010124 systemd[1]: Starting kubelet.service... May 8 00:50:37.097088 systemd[1]: Started kubelet.service. May 8 00:50:37.144042 kubelet[1870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:37.144542 kubelet[1870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:50:37.144542 kubelet[1870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:37.144661 kubelet[1870]: I0508 00:50:37.144599 1870 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:50:37.643083 kubelet[1870]: I0508 00:50:37.642998 1870 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:50:37.643083 kubelet[1870]: I0508 00:50:37.643047 1870 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:50:37.643394 kubelet[1870]: I0508 00:50:37.643311 1870 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:50:37.686624 kubelet[1870]: I0508 00:50:37.686537 1870 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:50:37.688445 kubelet[1870]: E0508 00:50:37.688392 1870 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.704325 kubelet[1870]: I0508 00:50:37.704267 1870 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:50:37.706189 kubelet[1870]: I0508 00:50:37.706119 1870 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:50:37.706389 kubelet[1870]: I0508 00:50:37.706171 1870 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:50:37.706527 kubelet[1870]: I0508 00:50:37.706398 1870 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:50:37.706527 kubelet[1870]: I0508 00:50:37.706408 1870 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:50:37.706626 kubelet[1870]: I0508 00:50:37.706567 1870 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:37.707607 kubelet[1870]: I0508 00:50:37.707568 1870 kubelet.go:400] "Attempting to sync node with API server" May 8 00:50:37.707607 kubelet[1870]: I0508 00:50:37.707593 1870 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:50:37.707713 kubelet[1870]: I0508 00:50:37.707630 1870 kubelet.go:312] "Adding apiserver pod source" May 8 00:50:37.707713 kubelet[1870]: I0508 00:50:37.707665 1870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:50:37.709507 kubelet[1870]: W0508 00:50:37.708508 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.709507 kubelet[1870]: E0508 00:50:37.708607 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.722190 kubelet[1870]: W0508 00:50:37.722055 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.722190 kubelet[1870]: E0508 00:50:37.722175 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection 
refused May 8 00:50:37.725909 kubelet[1870]: I0508 00:50:37.725835 1870 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:50:37.727677 kubelet[1870]: I0508 00:50:37.727651 1870 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:50:37.727784 kubelet[1870]: W0508 00:50:37.727755 1870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:50:37.729693 kubelet[1870]: I0508 00:50:37.729587 1870 server.go:1264] "Started kubelet" May 8 00:50:37.735644 kubelet[1870]: I0508 00:50:37.735141 1870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:50:37.735846 kubelet[1870]: I0508 00:50:37.735774 1870 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:50:37.735846 kubelet[1870]: I0508 00:50:37.735831 1870 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:50:37.739034 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 8 00:50:37.739316 kubelet[1870]: I0508 00:50:37.739253 1870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:50:37.739876 kubelet[1870]: I0508 00:50:37.739828 1870 server.go:455] "Adding debug handlers to kubelet server" May 8 00:50:37.742725 kubelet[1870]: E0508 00:50:37.742687 1870 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:50:37.742858 kubelet[1870]: I0508 00:50:37.742747 1870 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:50:37.742928 kubelet[1870]: I0508 00:50:37.742879 1870 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:50:37.742972 kubelet[1870]: I0508 00:50:37.742958 1870 reconciler.go:26] "Reconciler: start to sync state" May 8 00:50:37.743505 kubelet[1870]: W0508 00:50:37.743433 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.743582 kubelet[1870]: E0508 00:50:37.743515 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.743959 kubelet[1870]: E0508 00:50:37.743889 1870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms" May 8 00:50:37.747745 kubelet[1870]: E0508 00:50:37.747689 1870 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:50:37.747930 kubelet[1870]: E0508 00:50:37.747798 1870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66f61d524b70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:37.729540976 +0000 UTC m=+0.627517888,LastTimestamp:2025-05-08 00:50:37.729540976 +0000 UTC m=+0.627517888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:37.748729 kubelet[1870]: I0508 00:50:37.748699 1870 factory.go:221] Registration of the containerd container factory successfully May 8 00:50:37.748729 kubelet[1870]: I0508 00:50:37.748720 1870 factory.go:221] Registration of the systemd container factory successfully May 8 00:50:37.749212 kubelet[1870]: I0508 00:50:37.749169 1870 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:50:37.761666 kubelet[1870]: I0508 00:50:37.761552 1870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:50:37.762824 kubelet[1870]: I0508 00:50:37.762793 1870 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:50:37.762878 kubelet[1870]: I0508 00:50:37.762844 1870 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:50:37.762920 kubelet[1870]: I0508 00:50:37.762884 1870 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:50:37.763012 kubelet[1870]: E0508 00:50:37.762955 1870 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:50:37.765115 kubelet[1870]: W0508 00:50:37.765019 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.765115 kubelet[1870]: E0508 00:50:37.765115 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:37.780633 kubelet[1870]: I0508 00:50:37.780584 1870 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:50:37.780633 kubelet[1870]: I0508 00:50:37.780614 1870 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:50:37.780633 kubelet[1870]: I0508 00:50:37.780649 1870 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:37.844434 kubelet[1870]: I0508 00:50:37.844376 1870 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:37.844888 kubelet[1870]: E0508 00:50:37.844843 1870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 8 00:50:37.863167 kubelet[1870]: E0508 00:50:37.863085 1870 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" May 8 00:50:37.945367 kubelet[1870]: E0508 00:50:37.945172 1870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms" May 8 00:50:38.046751 kubelet[1870]: I0508 00:50:38.046718 1870 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:38.047336 kubelet[1870]: E0508 00:50:38.047255 1870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 8 00:50:38.063307 kubelet[1870]: E0508 00:50:38.063234 1870 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:50:38.346317 kubelet[1870]: E0508 00:50:38.346240 1870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms" May 8 00:50:38.377445 kubelet[1870]: I0508 00:50:38.377350 1870 policy_none.go:49] "None policy: Start" May 8 00:50:38.378456 kubelet[1870]: I0508 00:50:38.378418 1870 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:50:38.378456 kubelet[1870]: I0508 00:50:38.378449 1870 state_mem.go:35] "Initializing new in-memory state store" May 8 00:50:38.392880 kubelet[1870]: I0508 00:50:38.392831 1870 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:50:38.393091 kubelet[1870]: I0508 00:50:38.393037 1870 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" May 8 00:50:38.393222 kubelet[1870]: I0508 00:50:38.393200 1870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:50:38.395078 kubelet[1870]: E0508 00:50:38.395016 1870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:50:38.449033 kubelet[1870]: I0508 00:50:38.448995 1870 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:38.449519 kubelet[1870]: E0508 00:50:38.449440 1870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 8 00:50:38.463732 kubelet[1870]: I0508 00:50:38.463625 1870 topology_manager.go:215] "Topology Admit Handler" podUID="fde372d2ea881c147bf422aaf7ba5446" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:50:38.465128 kubelet[1870]: I0508 00:50:38.465105 1870 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:50:38.466053 kubelet[1870]: I0508 00:50:38.466017 1870 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:50:38.547345 kubelet[1870]: I0508 00:50:38.547279 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:38.547345 kubelet[1870]: I0508 00:50:38.547335 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:38.547603 kubelet[1870]: I0508 00:50:38.547360 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:38.547603 kubelet[1870]: I0508 00:50:38.547407 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:38.547603 kubelet[1870]: I0508 00:50:38.547433 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:38.547603 kubelet[1870]: I0508 00:50:38.547456 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:50:38.547603 kubelet[1870]: I0508 00:50:38.547484 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:38.547827 kubelet[1870]: I0508 00:50:38.547505 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:38.547827 kubelet[1870]: I0508 00:50:38.547525 1870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:38.566243 kubelet[1870]: W0508 00:50:38.566136 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:38.566243 kubelet[1870]: E0508 00:50:38.566241 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:38.583835 kubelet[1870]: W0508 00:50:38.583740 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:38.583835 kubelet[1870]: E0508 00:50:38.583829 1870 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:38.769585 kubelet[1870]: E0508 00:50:38.769520 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:38.769767 kubelet[1870]: E0508 00:50:38.769707 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:38.770318 env[1309]: time="2025-05-08T00:50:38.770268032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:50:38.770779 env[1309]: time="2025-05-08T00:50:38.770254086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fde372d2ea881c147bf422aaf7ba5446,Namespace:kube-system,Attempt:0,}" May 8 00:50:38.771545 kubelet[1870]: E0508 00:50:38.771523 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:38.772041 env[1309]: time="2025-05-08T00:50:38.771987933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:50:38.850602 kubelet[1870]: W0508 00:50:38.850487 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:38.850602 
kubelet[1870]: E0508 00:50:38.850590 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:39.147964 kubelet[1870]: E0508 00:50:39.147605 1870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s" May 8 00:50:39.240120 kubelet[1870]: W0508 00:50:39.239984 1870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:39.240352 kubelet[1870]: E0508 00:50:39.240102 1870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:39.251275 kubelet[1870]: I0508 00:50:39.251235 1870 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:39.251623 kubelet[1870]: E0508 00:50:39.251576 1870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 8 00:50:39.476020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197970628.mount: Deactivated successfully. 
May 8 00:50:39.486418 env[1309]: time="2025-05-08T00:50:39.486344195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.491632 env[1309]: time="2025-05-08T00:50:39.491554980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.492775 env[1309]: time="2025-05-08T00:50:39.492734809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.493752 env[1309]: time="2025-05-08T00:50:39.493709180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.495396 env[1309]: time="2025-05-08T00:50:39.495175450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.497221 env[1309]: time="2025-05-08T00:50:39.497156883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.498699 env[1309]: time="2025-05-08T00:50:39.498676764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.500722 env[1309]: time="2025-05-08T00:50:39.500649972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 
00:50:39.502054 env[1309]: time="2025-05-08T00:50:39.501997628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.503739 env[1309]: time="2025-05-08T00:50:39.503706427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.504958 env[1309]: time="2025-05-08T00:50:39.504934196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.506708 env[1309]: time="2025-05-08T00:50:39.506630751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.567686 env[1309]: time="2025-05-08T00:50:39.567558030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:39.567686 env[1309]: time="2025-05-08T00:50:39.567634855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:39.568008 env[1309]: time="2025-05-08T00:50:39.567651137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:39.568361 env[1309]: time="2025-05-08T00:50:39.568308809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c67956cf92c21b2413fa85ce3e129d1ec01e9ab881d4e70e863d798d7bd27 pid=1916 runtime=io.containerd.runc.v2 May 8 00:50:39.568423 env[1309]: time="2025-05-08T00:50:39.568306725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:39.568423 env[1309]: time="2025-05-08T00:50:39.568355106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:39.568423 env[1309]: time="2025-05-08T00:50:39.568369493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:39.568677 env[1309]: time="2025-05-08T00:50:39.568619386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c6ae3b9435b642ba023db96fb919b5efb1f6db3112176627e959442fe3f6656 pid=1920 runtime=io.containerd.runc.v2 May 8 00:50:39.592305 env[1309]: time="2025-05-08T00:50:39.592079248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:39.592305 env[1309]: time="2025-05-08T00:50:39.592178776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:39.592305 env[1309]: time="2025-05-08T00:50:39.592203904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:39.592616 env[1309]: time="2025-05-08T00:50:39.592573893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1beb4fc960e4b7e9cfa838cf8b9d6e50273e4618dd39b8c779851dbca7a38c38 pid=1958 runtime=io.containerd.runc.v2 May 8 00:50:39.752571 kubelet[1870]: E0508 00:50:39.752514 1870 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused May 8 00:50:39.833792 env[1309]: time="2025-05-08T00:50:39.833385305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee9c67956cf92c21b2413fa85ce3e129d1ec01e9ab881d4e70e863d798d7bd27\"" May 8 00:50:39.835815 kubelet[1870]: E0508 00:50:39.835779 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:39.838084 env[1309]: time="2025-05-08T00:50:39.838044467Z" level=info msg="CreateContainer within sandbox \"ee9c67956cf92c21b2413fa85ce3e129d1ec01e9ab881d4e70e863d798d7bd27\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:50:39.845078 env[1309]: time="2025-05-08T00:50:39.845032310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fde372d2ea881c147bf422aaf7ba5446,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c6ae3b9435b642ba023db96fb919b5efb1f6db3112176627e959442fe3f6656\"" May 8 00:50:39.845945 kubelet[1870]: E0508 00:50:39.845920 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:39.848070 env[1309]: time="2025-05-08T00:50:39.848006639Z" level=info msg="CreateContainer within sandbox \"5c6ae3b9435b642ba023db96fb919b5efb1f6db3112176627e959442fe3f6656\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:50:39.863482 env[1309]: time="2025-05-08T00:50:39.863409821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"1beb4fc960e4b7e9cfa838cf8b9d6e50273e4618dd39b8c779851dbca7a38c38\"" May 8 00:50:39.863646 env[1309]: time="2025-05-08T00:50:39.863431572Z" level=info msg="CreateContainer within sandbox \"ee9c67956cf92c21b2413fa85ce3e129d1ec01e9ab881d4e70e863d798d7bd27\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b7d65e37bca3f219cca6e7a3abb65a2b8bcc78195d9a531a319b32c169dfc23\"" May 8 00:50:39.864357 env[1309]: time="2025-05-08T00:50:39.864334888Z" level=info msg="StartContainer for \"4b7d65e37bca3f219cca6e7a3abb65a2b8bcc78195d9a531a319b32c169dfc23\"" May 8 00:50:39.864407 kubelet[1870]: E0508 00:50:39.864391 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:39.866390 env[1309]: time="2025-05-08T00:50:39.866343783Z" level=info msg="CreateContainer within sandbox \"1beb4fc960e4b7e9cfa838cf8b9d6e50273e4618dd39b8c779851dbca7a38c38\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:50:39.881531 env[1309]: time="2025-05-08T00:50:39.881416690Z" level=info msg="CreateContainer within sandbox \"5c6ae3b9435b642ba023db96fb919b5efb1f6db3112176627e959442fe3f6656\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"1e6c3192561543a50e3bb82a2c7a9bb69064e96c10a968b54f6124d96d612ee3\"" May 8 00:50:39.882120 env[1309]: time="2025-05-08T00:50:39.882100403Z" level=info msg="StartContainer for \"1e6c3192561543a50e3bb82a2c7a9bb69064e96c10a968b54f6124d96d612ee3\"" May 8 00:50:40.197812 env[1309]: time="2025-05-08T00:50:40.197655531Z" level=info msg="StartContainer for \"4b7d65e37bca3f219cca6e7a3abb65a2b8bcc78195d9a531a319b32c169dfc23\" returns successfully" May 8 00:50:40.197990 env[1309]: time="2025-05-08T00:50:40.197679516Z" level=info msg="StartContainer for \"1e6c3192561543a50e3bb82a2c7a9bb69064e96c10a968b54f6124d96d612ee3\" returns successfully" May 8 00:50:40.369857 env[1309]: time="2025-05-08T00:50:40.369789386Z" level=info msg="CreateContainer within sandbox \"1beb4fc960e4b7e9cfa838cf8b9d6e50273e4618dd39b8c779851dbca7a38c38\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"898dfd3c37f3a7d16a296c1f3bb7505c5a067d06cc400646db2b0929bf45bbc2\"" May 8 00:50:40.370938 env[1309]: time="2025-05-08T00:50:40.370892429Z" level=info msg="StartContainer for \"898dfd3c37f3a7d16a296c1f3bb7505c5a067d06cc400646db2b0929bf45bbc2\"" May 8 00:50:40.655831 env[1309]: time="2025-05-08T00:50:40.655742182Z" level=info msg="StartContainer for \"898dfd3c37f3a7d16a296c1f3bb7505c5a067d06cc400646db2b0929bf45bbc2\" returns successfully" May 8 00:50:40.786499 kubelet[1870]: E0508 00:50:40.786456 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:40.788161 kubelet[1870]: E0508 00:50:40.788132 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:40.789969 kubelet[1870]: E0508 00:50:40.789940 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:40.852944 kubelet[1870]: I0508 00:50:40.852899 1870 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:41.209626 kubelet[1870]: E0508 00:50:41.209574 1870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:50:41.350482 kubelet[1870]: E0508 00:50:41.350323 1870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183d66f61d524b70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:37.729540976 +0000 UTC m=+0.627517888,LastTimestamp:2025-05-08 00:50:37.729540976 +0000 UTC m=+0.627517888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:41.506940 kubelet[1870]: I0508 00:50:41.506895 1870 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:50:41.511459 kubelet[1870]: E0508 00:50:41.511325 1870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183d66f61e66f7c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:37.747673027 +0000 UTC m=+0.645649909,LastTimestamp:2025-05-08 00:50:37.747673027 +0000 UTC 
m=+0.645649909,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:41.660880 kubelet[1870]: E0508 00:50:41.660720 1870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183d66f620450ab9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:37.779004089 +0000 UTC m=+0.676980971,LastTimestamp:2025-05-08 00:50:37.779004089 +0000 UTC m=+0.676980971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:41.711585 kubelet[1870]: I0508 00:50:41.711529 1870 apiserver.go:52] "Watching apiserver" May 8 00:50:41.743394 kubelet[1870]: I0508 00:50:41.743330 1870 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:50:41.939578 kubelet[1870]: E0508 00:50:41.939411 1870 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:50:41.940028 kubelet[1870]: E0508 00:50:41.939906 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:42.167979 kubelet[1870]: E0508 00:50:42.167922 1870 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-localhost" May 8 00:50:42.168403 kubelet[1870]: E0508 00:50:42.168381 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:42.802046 kubelet[1870]: E0508 00:50:42.801991 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:43.794669 kubelet[1870]: E0508 00:50:43.794610 1870 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:44.072485 systemd[1]: Reloading. May 8 00:50:44.140563 /usr/lib/systemd/system-generators/torcx-generator[2157]: time="2025-05-08T00:50:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:44.140601 /usr/lib/systemd/system-generators/torcx-generator[2157]: time="2025-05-08T00:50:44Z" level=info msg="torcx already run" May 8 00:50:44.220148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:44.220173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:44.238038 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:50:44.322766 systemd[1]: Stopping kubelet.service... 
May 8 00:50:44.342571 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:50:44.342864 systemd[1]: Stopped kubelet.service. May 8 00:50:44.344604 systemd[1]: Starting kubelet.service... May 8 00:50:44.540890 systemd[1]: Started kubelet.service. May 8 00:50:44.588488 kubelet[2214]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:44.588488 kubelet[2214]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:50:44.588488 kubelet[2214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:44.588488 kubelet[2214]: I0508 00:50:44.588410 2214 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:50:44.592879 kubelet[2214]: I0508 00:50:44.592829 2214 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:50:44.592879 kubelet[2214]: I0508 00:50:44.592867 2214 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:50:44.593110 kubelet[2214]: I0508 00:50:44.593079 2214 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:50:44.594303 kubelet[2214]: I0508 00:50:44.594276 2214 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 8 00:50:44.595322 kubelet[2214]: I0508 00:50:44.595295 2214 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:50:44.603563 kubelet[2214]: I0508 00:50:44.603516 2214 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:50:44.655643 kubelet[2214]: I0508 00:50:44.655565 2214 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:50:44.655843 kubelet[2214]: I0508 00:50:44.655620 2214 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRes
ervedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:50:44.655843 kubelet[2214]: I0508 00:50:44.655823 2214 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:50:44.655843 kubelet[2214]: I0508 00:50:44.655836 2214 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:50:44.656029 kubelet[2214]: I0508 00:50:44.655885 2214 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:44.656029 kubelet[2214]: I0508 00:50:44.655985 2214 kubelet.go:400] "Attempting to sync node with API server" May 8 00:50:44.656029 kubelet[2214]: I0508 00:50:44.655996 2214 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:50:44.656029 kubelet[2214]: I0508 00:50:44.656016 2214 kubelet.go:312] "Adding apiserver pod source" May 8 00:50:44.656029 kubelet[2214]: I0508 00:50:44.656030 2214 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:50:44.659199 kubelet[2214]: I0508 00:50:44.657495 2214 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:50:44.659199 kubelet[2214]: I0508 00:50:44.658437 2214 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:50:44.659199 kubelet[2214]: I0508 00:50:44.658844 2214 server.go:1264] "Started kubelet" May 8 00:50:44.660709 kubelet[2214]: I0508 00:50:44.660668 2214 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:50:44.660963 kubelet[2214]: I0508 00:50:44.660906 2214 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:50:44.662465 kubelet[2214]: I0508 00:50:44.662433 2214 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:50:44.663029 
kubelet[2214]: I0508 00:50:44.663004 2214 server.go:455] "Adding debug handlers to kubelet server" May 8 00:50:44.664421 kubelet[2214]: I0508 00:50:44.664392 2214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:50:44.670312 kubelet[2214]: E0508 00:50:44.666664 2214 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:50:44.670312 kubelet[2214]: I0508 00:50:44.666705 2214 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:50:44.670312 kubelet[2214]: I0508 00:50:44.666798 2214 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:50:44.670312 kubelet[2214]: I0508 00:50:44.666915 2214 reconciler.go:26] "Reconciler: start to sync state" May 8 00:50:44.670312 kubelet[2214]: E0508 00:50:44.668323 2214 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:50:44.670312 kubelet[2214]: I0508 00:50:44.669295 2214 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:50:44.670669 kubelet[2214]: I0508 00:50:44.670610 2214 factory.go:221] Registration of the containerd container factory successfully May 8 00:50:44.670669 kubelet[2214]: I0508 00:50:44.670669 2214 factory.go:221] Registration of the systemd container factory successfully May 8 00:50:44.672563 kubelet[2214]: I0508 00:50:44.672524 2214 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:50:44.673367 kubelet[2214]: I0508 00:50:44.673323 2214 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:50:44.673367 kubelet[2214]: I0508 00:50:44.673351 2214 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:50:44.673367 kubelet[2214]: I0508 00:50:44.673373 2214 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:50:44.673636 kubelet[2214]: E0508 00:50:44.673422 2214 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:50:44.709365 kubelet[2214]: I0508 00:50:44.709313 2214 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:50:44.709365 kubelet[2214]: I0508 00:50:44.709342 2214 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:50:44.709365 kubelet[2214]: I0508 00:50:44.709391 2214 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:44.709657 kubelet[2214]: I0508 00:50:44.709595 2214 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:50:44.709657 kubelet[2214]: I0508 00:50:44.709609 2214 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:50:44.709657 kubelet[2214]: I0508 00:50:44.709633 2214 policy_none.go:49] "None policy: Start" May 8 00:50:44.710319 kubelet[2214]: I0508 00:50:44.710285 2214 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:50:44.710319 kubelet[2214]: I0508 00:50:44.710312 2214 state_mem.go:35] "Initializing new in-memory state store" May 8 00:50:44.710574 kubelet[2214]: I0508 00:50:44.710483 2214 state_mem.go:75] "Updated machine memory state" May 8 00:50:44.711775 kubelet[2214]: I0508 00:50:44.711745 2214 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:50:44.711990 kubelet[2214]: I0508 00:50:44.711948 2214 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:50:44.712817 kubelet[2214]: I0508 00:50:44.712568 2214 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:50:44.771364 kubelet[2214]: I0508 00:50:44.771319 2214 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:50:44.774584 kubelet[2214]: I0508 00:50:44.774511 2214 topology_manager.go:215] "Topology Admit Handler" podUID="fde372d2ea881c147bf422aaf7ba5446" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:50:44.774702 kubelet[2214]: I0508 00:50:44.774685 2214 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:50:44.774748 kubelet[2214]: I0508 00:50:44.774738 2214 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:50:44.905321 kubelet[2214]: E0508 00:50:44.904432 2214 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:50:44.967914 kubelet[2214]: I0508 00:50:44.967860 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:44.967914 kubelet[2214]: I0508 00:50:44.967905 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:44.967914 kubelet[2214]: I0508 00:50:44.967926 2214 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:44.968188 kubelet[2214]: I0508 00:50:44.967944 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:44.968188 kubelet[2214]: I0508 00:50:44.967968 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:44.968188 kubelet[2214]: I0508 00:50:44.968042 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fde372d2ea881c147bf422aaf7ba5446-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fde372d2ea881c147bf422aaf7ba5446\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:44.968188 kubelet[2214]: I0508 00:50:44.968109 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:44.968188 kubelet[2214]: I0508 00:50:44.968138 2214 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:44.968360 kubelet[2214]: I0508 00:50:44.968156 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:50:45.071728 kubelet[2214]: I0508 00:50:45.071658 2214 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:50:45.071930 kubelet[2214]: I0508 00:50:45.071776 2214 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:50:45.099399 sudo[2248]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:50:45.099617 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 8 00:50:45.203443 kubelet[2214]: E0508 00:50:45.203320 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.205187 kubelet[2214]: E0508 00:50:45.205170 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.205631 kubelet[2214]: E0508 00:50:45.205579 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.589528 sudo[2248]: 
pam_unix(sudo:session): session closed for user root May 8 00:50:45.657235 kubelet[2214]: I0508 00:50:45.657174 2214 apiserver.go:52] "Watching apiserver" May 8 00:50:45.667783 kubelet[2214]: I0508 00:50:45.667734 2214 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:50:45.688326 kubelet[2214]: E0508 00:50:45.687708 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.831301 kubelet[2214]: E0508 00:50:45.831234 2214 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:50:45.831630 kubelet[2214]: E0508 00:50:45.831607 2214 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:50:45.832040 kubelet[2214]: E0508 00:50:45.831626 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.832171 kubelet[2214]: E0508 00:50:45.832150 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:46.280540 kubelet[2214]: I0508 00:50:46.280412 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.280370634 podStartE2EDuration="4.280370634s" podCreationTimestamp="2025-05-08 00:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:45.831893826 +0000 UTC m=+1.286191705" watchObservedRunningTime="2025-05-08 00:50:46.280370634 
+0000 UTC m=+1.734668482" May 8 00:50:46.500103 kubelet[2214]: I0508 00:50:46.500030 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.500010835 podStartE2EDuration="2.500010835s" podCreationTimestamp="2025-05-08 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:46.280860218 +0000 UTC m=+1.735158056" watchObservedRunningTime="2025-05-08 00:50:46.500010835 +0000 UTC m=+1.954308683" May 8 00:50:46.500420 kubelet[2214]: I0508 00:50:46.500115 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.500110102 podStartE2EDuration="2.500110102s" podCreationTimestamp="2025-05-08 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:46.499778688 +0000 UTC m=+1.954076536" watchObservedRunningTime="2025-05-08 00:50:46.500110102 +0000 UTC m=+1.954407940" May 8 00:50:46.690057 kubelet[2214]: E0508 00:50:46.689912 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:46.690057 kubelet[2214]: E0508 00:50:46.689966 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:47.691552 kubelet[2214]: E0508 00:50:47.691510 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:47.736139 kubelet[2214]: E0508 00:50:47.736072 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:50.103304 kubelet[2214]: E0508 00:50:50.103220 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:50.420049 sudo[1432]: pam_unix(sudo:session): session closed for user root May 8 00:50:50.427410 sshd[1427]: pam_unix(sshd:session): session closed for user core May 8 00:50:50.429859 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:42044.service: Deactivated successfully. May 8 00:50:50.430904 systemd-logind[1291]: Session 5 logged out. Waiting for processes to exit. May 8 00:50:50.430951 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:50:50.431916 systemd-logind[1291]: Removed session 5. May 8 00:50:50.696971 kubelet[2214]: E0508 00:50:50.695947 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:51.697683 kubelet[2214]: E0508 00:50:51.697638 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:57.051178 kubelet[2214]: E0508 00:50:57.051125 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:57.740422 kubelet[2214]: E0508 00:50:57.740386 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:00.943908 kubelet[2214]: I0508 00:51:00.943849 2214 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:51:00.944541 env[1309]: 
time="2025-05-08T00:51:00.944500212Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:51:00.944856 kubelet[2214]: I0508 00:51:00.944745 2214 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:51:03.156964 kubelet[2214]: I0508 00:51:03.156897 2214 topology_manager.go:215] "Topology Admit Handler" podUID="3d971778-ba9d-454c-9dc9-6db4de61e228" podNamespace="kube-system" podName="kube-proxy-fr4k8" May 8 00:51:03.157473 kubelet[2214]: I0508 00:51:03.157065 2214 topology_manager.go:215] "Topology Admit Handler" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" podNamespace="kube-system" podName="cilium-dnmzq" May 8 00:51:03.275574 kubelet[2214]: I0508 00:51:03.275519 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d971778-ba9d-454c-9dc9-6db4de61e228-lib-modules\") pod \"kube-proxy-fr4k8\" (UID: \"3d971778-ba9d-454c-9dc9-6db4de61e228\") " pod="kube-system/kube-proxy-fr4k8" May 8 00:51:03.275574 kubelet[2214]: I0508 00:51:03.275565 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-etc-cni-netd\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275574 kubelet[2214]: I0508 00:51:03.275581 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-lib-modules\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275574 kubelet[2214]: I0508 00:51:03.275597 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-config-path\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275614 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-net\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275630 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-clustermesh-secrets\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275646 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hostproc\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275660 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-xtables-lock\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275729 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hubble-tls\") pod 
\"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.275877 kubelet[2214]: I0508 00:51:03.275799 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-bpf-maps\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.276037 kubelet[2214]: I0508 00:51:03.275825 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfc56\" (UniqueName: \"kubernetes.io/projected/3d971778-ba9d-454c-9dc9-6db4de61e228-kube-api-access-kfc56\") pod \"kube-proxy-fr4k8\" (UID: \"3d971778-ba9d-454c-9dc9-6db4de61e228\") " pod="kube-system/kube-proxy-fr4k8" May 8 00:51:03.276037 kubelet[2214]: I0508 00:51:03.275853 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d971778-ba9d-454c-9dc9-6db4de61e228-kube-proxy\") pod \"kube-proxy-fr4k8\" (UID: \"3d971778-ba9d-454c-9dc9-6db4de61e228\") " pod="kube-system/kube-proxy-fr4k8" May 8 00:51:03.276037 kubelet[2214]: I0508 00:51:03.275944 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d971778-ba9d-454c-9dc9-6db4de61e228-xtables-lock\") pod \"kube-proxy-fr4k8\" (UID: \"3d971778-ba9d-454c-9dc9-6db4de61e228\") " pod="kube-system/kube-proxy-fr4k8" May 8 00:51:03.276166 kubelet[2214]: I0508 00:51:03.276030 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cni-path\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.276166 
kubelet[2214]: I0508 00:51:03.276065 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-kernel\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.276166 kubelet[2214]: I0508 00:51:03.276088 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h859v\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-kube-api-access-h859v\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.276166 kubelet[2214]: I0508 00:51:03.276107 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-run\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.276166 kubelet[2214]: I0508 00:51:03.276127 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-cgroup\") pod \"cilium-dnmzq\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " pod="kube-system/cilium-dnmzq" May 8 00:51:03.771323 kubelet[2214]: I0508 00:51:03.765999 2214 topology_manager.go:215] "Topology Admit Handler" podUID="6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" podNamespace="kube-system" podName="cilium-operator-599987898-n6dpb" May 8 00:51:03.781034 kubelet[2214]: I0508 00:51:03.780952 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-cilium-config-path\") 
pod \"cilium-operator-599987898-n6dpb\" (UID: \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\") " pod="kube-system/cilium-operator-599987898-n6dpb" May 8 00:51:03.781223 kubelet[2214]: I0508 00:51:03.781025 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t79fv\" (UniqueName: \"kubernetes.io/projected/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-kube-api-access-t79fv\") pod \"cilium-operator-599987898-n6dpb\" (UID: \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\") " pod="kube-system/cilium-operator-599987898-n6dpb" May 8 00:51:04.060227 kubelet[2214]: E0508 00:51:04.060100 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.060764 env[1309]: time="2025-05-08T00:51:04.060724241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr4k8,Uid:3d971778-ba9d-454c-9dc9-6db4de61e228,Namespace:kube-system,Attempt:0,}" May 8 00:51:04.062391 kubelet[2214]: E0508 00:51:04.062354 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.062823 env[1309]: time="2025-05-08T00:51:04.062783539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnmzq,Uid:86d7eaa7-85f4-4d05-9af2-eedae9936a4f,Namespace:kube-system,Attempt:0,}" May 8 00:51:04.370861 kubelet[2214]: E0508 00:51:04.370661 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.371414 env[1309]: time="2025-05-08T00:51:04.371375574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n6dpb,Uid:6b0cf41d-071d-4d30-b83a-32bd2bdc33f6,Namespace:kube-system,Attempt:0,}" May 8 00:51:04.385807 env[1309]: 
time="2025-05-08T00:51:04.385691876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:04.385807 env[1309]: time="2025-05-08T00:51:04.385750075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:04.385807 env[1309]: time="2025-05-08T00:51:04.385786143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:04.386078 env[1309]: time="2025-05-08T00:51:04.386028388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53d554b68008e1100e3dfa73e779e103b33fdf9270420c29aacb1f92a1e7e95b pid=2315 runtime=io.containerd.runc.v2 May 8 00:51:04.415694 env[1309]: time="2025-05-08T00:51:04.414222197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:04.415694 env[1309]: time="2025-05-08T00:51:04.414294353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:04.415694 env[1309]: time="2025-05-08T00:51:04.414304923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:04.415694 env[1309]: time="2025-05-08T00:51:04.414488017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad pid=2340 runtime=io.containerd.runc.v2 May 8 00:51:04.447143 env[1309]: time="2025-05-08T00:51:04.447079888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr4k8,Uid:3d971778-ba9d-454c-9dc9-6db4de61e228,Namespace:kube-system,Attempt:0,} returns sandbox id \"53d554b68008e1100e3dfa73e779e103b33fdf9270420c29aacb1f92a1e7e95b\"" May 8 00:51:04.448484 kubelet[2214]: E0508 00:51:04.448188 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.454022 env[1309]: time="2025-05-08T00:51:04.453956926Z" level=info msg="CreateContainer within sandbox \"53d554b68008e1100e3dfa73e779e103b33fdf9270420c29aacb1f92a1e7e95b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:51:04.463720 env[1309]: time="2025-05-08T00:51:04.463671203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnmzq,Uid:86d7eaa7-85f4-4d05-9af2-eedae9936a4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\"" May 8 00:51:04.464553 kubelet[2214]: E0508 00:51:04.464528 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.465690 env[1309]: time="2025-05-08T00:51:04.465606038Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:51:04.904967 env[1309]: time="2025-05-08T00:51:04.904843169Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:04.904967 env[1309]: time="2025-05-08T00:51:04.904892601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:04.904967 env[1309]: time="2025-05-08T00:51:04.904905446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:04.905252 env[1309]: time="2025-05-08T00:51:04.905057812Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433 pid=2396 runtime=io.containerd.runc.v2 May 8 00:51:04.947645 env[1309]: time="2025-05-08T00:51:04.947563085Z" level=info msg="CreateContainer within sandbox \"53d554b68008e1100e3dfa73e779e103b33fdf9270420c29aacb1f92a1e7e95b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee67a0f1f347203dc8e5c44805485c68dd259c285a5411a47260d8758dcc0bfc\"" May 8 00:51:04.950033 env[1309]: time="2025-05-08T00:51:04.949573952Z" level=info msg="StartContainer for \"ee67a0f1f347203dc8e5c44805485c68dd259c285a5411a47260d8758dcc0bfc\"" May 8 00:51:04.960803 env[1309]: time="2025-05-08T00:51:04.960731301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n6dpb,Uid:6b0cf41d-071d-4d30-b83a-32bd2bdc33f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\"" May 8 00:51:04.961632 kubelet[2214]: E0508 00:51:04.961602 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:05.022469 env[1309]: time="2025-05-08T00:51:05.022411157Z" level=info msg="StartContainer for 
\"ee67a0f1f347203dc8e5c44805485c68dd259c285a5411a47260d8758dcc0bfc\" returns successfully" May 8 00:51:05.721549 kubelet[2214]: E0508 00:51:05.721509 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:05.730899 kubelet[2214]: I0508 00:51:05.730828 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fr4k8" podStartSLOduration=3.730807533 podStartE2EDuration="3.730807533s" podCreationTimestamp="2025-05-08 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:51:05.729987533 +0000 UTC m=+21.184285391" watchObservedRunningTime="2025-05-08 00:51:05.730807533 +0000 UTC m=+21.185105381" May 8 00:51:06.724779 kubelet[2214]: E0508 00:51:06.724731 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:13.444525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041223486.mount: Deactivated successfully. 
May 8 00:51:21.470604 env[1309]: time="2025-05-08T00:51:21.470374198Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:21.498071 env[1309]: time="2025-05-08T00:51:21.497998448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:21.500664 env[1309]: time="2025-05-08T00:51:21.500605934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:21.501574 env[1309]: time="2025-05-08T00:51:21.501541525Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:51:21.513521 env[1309]: time="2025-05-08T00:51:21.513450857Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:51:21.514316 env[1309]: time="2025-05-08T00:51:21.514282023Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:51:21.795066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674469245.mount: Deactivated successfully. 
May 8 00:51:21.976497 env[1309]: time="2025-05-08T00:51:21.976425110Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\""
May 8 00:51:21.976979 env[1309]: time="2025-05-08T00:51:21.976947835Z" level=info msg="StartContainer for \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\""
May 8 00:51:23.010579 env[1309]: time="2025-05-08T00:51:23.010502261Z" level=info msg="StartContainer for \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\" returns successfully"
May 8 00:51:23.022980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583-rootfs.mount: Deactivated successfully.
May 8 00:51:24.077472 kubelet[2214]: E0508 00:51:24.015969 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:24.188274 env[1309]: time="2025-05-08T00:51:24.188181035Z" level=info msg="shim disconnected" id=3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583
May 8 00:51:24.188274 env[1309]: time="2025-05-08T00:51:24.188240100Z" level=warning msg="cleaning up after shim disconnected" id=3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583 namespace=k8s.io
May 8 00:51:24.188274 env[1309]: time="2025-05-08T00:51:24.188251261Z" level=info msg="cleaning up dead shim"
May 8 00:51:24.195280 env[1309]: time="2025-05-08T00:51:24.195156179Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2633 runtime=io.containerd.runc.v2\n"
May 8 00:51:25.018693 kubelet[2214]: E0508 00:51:25.018635 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:25.021058 env[1309]: time="2025-05-08T00:51:25.020854193Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:51:25.817965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047043809.mount: Deactivated successfully.
May 8 00:51:25.824857 env[1309]: time="2025-05-08T00:51:25.824773977Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\""
May 8 00:51:25.825836 env[1309]: time="2025-05-08T00:51:25.825483538Z" level=info msg="StartContainer for \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\""
May 8 00:51:25.951038 env[1309]: time="2025-05-08T00:51:25.950955709Z" level=info msg="StartContainer for \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\" returns successfully"
May 8 00:51:25.955404 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:51:25.955744 systemd[1]: Stopped systemd-sysctl.service.
May 8 00:51:25.957490 systemd[1]: Stopping systemd-sysctl.service...
May 8 00:51:25.959680 systemd[1]: Starting systemd-sysctl.service...
May 8 00:51:25.970082 systemd[1]: Finished systemd-sysctl.service.
May 8 00:51:25.983530 env[1309]: time="2025-05-08T00:51:25.983466878Z" level=info msg="shim disconnected" id=471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e
May 8 00:51:25.983530 env[1309]: time="2025-05-08T00:51:25.983525271Z" level=warning msg="cleaning up after shim disconnected" id=471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e namespace=k8s.io
May 8 00:51:25.983792 env[1309]: time="2025-05-08T00:51:25.983541352Z" level=info msg="cleaning up dead shim"
May 8 00:51:25.990709 env[1309]: time="2025-05-08T00:51:25.990641616Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2698 runtime=io.containerd.runc.v2\n"
May 8 00:51:26.023225 kubelet[2214]: E0508 00:51:26.023166 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:26.025679 env[1309]: time="2025-05-08T00:51:26.025624052Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:51:26.059934 env[1309]: time="2025-05-08T00:51:26.059818582Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\""
May 8 00:51:26.060644 env[1309]: time="2025-05-08T00:51:26.060607475Z" level=info msg="StartContainer for \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\""
May 8 00:51:26.108942 env[1309]: time="2025-05-08T00:51:26.108123142Z" level=info msg="StartContainer for \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\" returns successfully"
May 8 00:51:26.131575 env[1309]: time="2025-05-08T00:51:26.131499759Z" level=info msg="shim disconnected" id=e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9
May 8 00:51:26.131575 env[1309]: time="2025-05-08T00:51:26.131552020Z" level=warning msg="cleaning up after shim disconnected" id=e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9 namespace=k8s.io
May 8 00:51:26.131575 env[1309]: time="2025-05-08T00:51:26.131573651Z" level=info msg="cleaning up dead shim"
May 8 00:51:26.139629 env[1309]: time="2025-05-08T00:51:26.139590725Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2754 runtime=io.containerd.runc.v2\n"
May 8 00:51:26.814322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e-rootfs.mount: Deactivated successfully.
May 8 00:51:27.027579 kubelet[2214]: E0508 00:51:27.027543 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:27.029466 env[1309]: time="2025-05-08T00:51:27.029427863Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:51:27.399540 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:34652.service.
May 8 00:51:27.439936 sshd[2766]: Accepted publickey for core from 10.0.0.1 port 34652 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:27.441709 sshd[2766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:27.448940 systemd-logind[1291]: New session 6 of user core.
May 8 00:51:27.449471 systemd[1]: Started session-6.scope.
May 8 00:51:27.459855 env[1309]: time="2025-05-08T00:51:27.459796866Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\""
May 8 00:51:27.460637 env[1309]: time="2025-05-08T00:51:27.460605787Z" level=info msg="StartContainer for \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\""
May 8 00:51:27.524995 env[1309]: time="2025-05-08T00:51:27.524945337Z" level=info msg="StartContainer for \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\" returns successfully"
May 8 00:51:27.604691 sshd[2766]: pam_unix(sshd:session): session closed for user core
May 8 00:51:27.607957 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:34652.service: Deactivated successfully.
May 8 00:51:27.609545 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:51:27.609580 systemd-logind[1291]: Session 6 logged out. Waiting for processes to exit.
May 8 00:51:27.610886 systemd-logind[1291]: Removed session 6.
May 8 00:51:27.612104 env[1309]: time="2025-05-08T00:51:27.612036864Z" level=info msg="shim disconnected" id=da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409
May 8 00:51:27.612104 env[1309]: time="2025-05-08T00:51:27.612092471Z" level=warning msg="cleaning up after shim disconnected" id=da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409 namespace=k8s.io
May 8 00:51:27.612104 env[1309]: time="2025-05-08T00:51:27.612102540Z" level=info msg="cleaning up dead shim"
May 8 00:51:27.620752 env[1309]: time="2025-05-08T00:51:27.620672238Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2824 runtime=io.containerd.runc.v2\n"
May 8 00:51:27.814492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409-rootfs.mount: Deactivated successfully.
May 8 00:51:28.039924 kubelet[2214]: E0508 00:51:28.039888 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:28.046248 env[1309]: time="2025-05-08T00:51:28.046195221Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:51:29.727094 env[1309]: time="2025-05-08T00:51:29.726989964Z" level=info msg="CreateContainer within sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\""
May 8 00:51:29.727704 env[1309]: time="2025-05-08T00:51:29.727669423Z" level=info msg="StartContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\""
May 8 00:51:29.868451 env[1309]: time="2025-05-08T00:51:29.868374463Z" level=info msg="StartContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" returns successfully"
May 8 00:51:29.891645 env[1309]: time="2025-05-08T00:51:29.891577231Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:51:29.901861 env[1309]: time="2025-05-08T00:51:29.901813506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:51:29.905657 env[1309]: time="2025-05-08T00:51:29.905045533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:51:29.905657 env[1309]: time="2025-05-08T00:51:29.905551538Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:51:29.913346 env[1309]: time="2025-05-08T00:51:29.911227785Z" level=info msg="CreateContainer within sandbox \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:51:29.931632 env[1309]: time="2025-05-08T00:51:29.931524244Z" level=info msg="CreateContainer within sandbox \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\""
May 8 00:51:29.932642 env[1309]: time="2025-05-08T00:51:29.932585569Z" level=info msg="StartContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\""
May 8 00:51:30.004173 kubelet[2214]: I0508 00:51:30.004115 2214 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:51:30.032101 kubelet[2214]: I0508 00:51:30.030688 2214 topology_manager.go:215] "Topology Admit Handler" podUID="bb8f87fb-8b13-4855-a0a9-74b987b3b895" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tsj7r"
May 8 00:51:30.041023 kubelet[2214]: I0508 00:51:30.040977 2214 topology_manager.go:215] "Topology Admit Handler" podUID="0c0aba97-120a-46c5-bdb6-fe214ec015e9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6btcg"
May 8 00:51:30.168974 kubelet[2214]: I0508 00:51:30.168924 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpnp\" (UniqueName: \"kubernetes.io/projected/0c0aba97-120a-46c5-bdb6-fe214ec015e9-kube-api-access-nhpnp\") pod \"coredns-7db6d8ff4d-6btcg\" (UID: \"0c0aba97-120a-46c5-bdb6-fe214ec015e9\") " pod="kube-system/coredns-7db6d8ff4d-6btcg"
May 8 00:51:30.169168 kubelet[2214]: I0508 00:51:30.168984 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5w8f\" (UniqueName: \"kubernetes.io/projected/bb8f87fb-8b13-4855-a0a9-74b987b3b895-kube-api-access-m5w8f\") pod \"coredns-7db6d8ff4d-tsj7r\" (UID: \"bb8f87fb-8b13-4855-a0a9-74b987b3b895\") " pod="kube-system/coredns-7db6d8ff4d-tsj7r"
May 8 00:51:30.169168 kubelet[2214]: I0508 00:51:30.169020 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0aba97-120a-46c5-bdb6-fe214ec015e9-config-volume\") pod \"coredns-7db6d8ff4d-6btcg\" (UID: \"0c0aba97-120a-46c5-bdb6-fe214ec015e9\") " pod="kube-system/coredns-7db6d8ff4d-6btcg"
May 8 00:51:30.169168 kubelet[2214]: I0508 00:51:30.169051 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb8f87fb-8b13-4855-a0a9-74b987b3b895-config-volume\") pod \"coredns-7db6d8ff4d-tsj7r\" (UID: \"bb8f87fb-8b13-4855-a0a9-74b987b3b895\") " pod="kube-system/coredns-7db6d8ff4d-tsj7r"
May 8 00:51:30.446239 env[1309]: time="2025-05-08T00:51:30.446141016Z" level=info msg="StartContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" returns successfully"
May 8 00:51:30.449525 kubelet[2214]: E0508 00:51:30.449489 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:30.451871 kubelet[2214]: E0508 00:51:30.451819 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:30.637702 kubelet[2214]: E0508 00:51:30.637662 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:30.639418 env[1309]: time="2025-05-08T00:51:30.639366219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tsj7r,Uid:bb8f87fb-8b13-4855-a0a9-74b987b3b895,Namespace:kube-system,Attempt:0,}"
May 8 00:51:30.668075 kubelet[2214]: E0508 00:51:30.668030 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:30.668676 env[1309]: time="2025-05-08T00:51:30.668616821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6btcg,Uid:0c0aba97-120a-46c5-bdb6-fe214ec015e9,Namespace:kube-system,Attempt:0,}"
May 8 00:51:30.756457 kubelet[2214]: I0508 00:51:30.756089 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dnmzq" podStartSLOduration=11.718392694 podStartE2EDuration="28.756068237s" podCreationTimestamp="2025-05-08 00:51:02 +0000 UTC" firstStartedPulling="2025-05-08 00:51:04.465105127 +0000 UTC m=+19.919402985" lastFinishedPulling="2025-05-08 00:51:21.50278066 +0000 UTC m=+36.957078528" observedRunningTime="2025-05-08 00:51:30.594158462 +0000 UTC m=+46.048456330" watchObservedRunningTime="2025-05-08 00:51:30.756068237 +0000 UTC m=+46.210366085"
May 8 00:51:30.756457 kubelet[2214]: I0508 00:51:30.756272 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-n6dpb" podStartSLOduration=3.812140031 podStartE2EDuration="28.75626679s" podCreationTimestamp="2025-05-08 00:51:02 +0000 UTC" firstStartedPulling="2025-05-08 00:51:04.962552111 +0000 UTC m=+20.416849959" lastFinishedPulling="2025-05-08 00:51:29.90667887 +0000 UTC m=+45.360976718" observedRunningTime="2025-05-08 00:51:30.755904061 +0000 UTC m=+46.210201899" watchObservedRunningTime="2025-05-08 00:51:30.75626679 +0000 UTC m=+46.210564648"
May 8 00:51:31.455728 kubelet[2214]: E0508 00:51:31.455681 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:31.456370 kubelet[2214]: E0508 00:51:31.456340 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:32.460535 kubelet[2214]: E0508 00:51:32.457856 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:32.606913 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:34656.service.
May 8 00:51:32.641461 sshd[3034]: Accepted publickey for core from 10.0.0.1 port 34656 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:32.643054 sshd[3034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:32.647794 systemd-logind[1291]: New session 7 of user core.
May 8 00:51:32.648967 systemd[1]: Started session-7.scope.
May 8 00:51:32.773214 sshd[3034]: pam_unix(sshd:session): session closed for user core
May 8 00:51:32.775492 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:34656.service: Deactivated successfully.
May 8 00:51:32.776667 systemd-logind[1291]: Session 7 logged out. Waiting for processes to exit.
May 8 00:51:32.776785 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:51:32.777684 systemd-logind[1291]: Removed session 7.
May 8 00:51:34.237033 systemd-networkd[1091]: cilium_host: Link UP
May 8 00:51:34.237203 systemd-networkd[1091]: cilium_net: Link UP
May 8 00:51:34.242667 systemd-networkd[1091]: cilium_net: Gained carrier
May 8 00:51:34.243739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 8 00:51:34.243817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 8 00:51:34.245431 systemd-networkd[1091]: cilium_host: Gained carrier
May 8 00:51:34.245596 systemd-networkd[1091]: cilium_net: Gained IPv6LL
May 8 00:51:34.245984 systemd-networkd[1091]: cilium_host: Gained IPv6LL
May 8 00:51:34.367148 systemd-networkd[1091]: cilium_vxlan: Link UP
May 8 00:51:34.367158 systemd-networkd[1091]: cilium_vxlan: Gained carrier
May 8 00:51:34.602313 kernel: NET: Registered PF_ALG protocol family
May 8 00:51:35.187037 systemd-networkd[1091]: lxc_health: Link UP
May 8 00:51:35.196270 systemd-networkd[1091]: lxc_health: Gained carrier
May 8 00:51:35.198291 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:51:35.319090 systemd-networkd[1091]: lxc8dc57b079d3e: Link UP
May 8 00:51:35.331302 kernel: eth0: renamed from tmp5eda1
May 8 00:51:35.340383 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 8 00:51:35.340501 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8dc57b079d3e: link becomes ready
May 8 00:51:35.340685 systemd-networkd[1091]: lxc8dc57b079d3e: Gained carrier
May 8 00:51:35.341561 systemd-networkd[1091]: lxcf370b37ca5fc: Link UP
May 8 00:51:35.355308 kernel: eth0: renamed from tmp5350e
May 8 00:51:35.364454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf370b37ca5fc: link becomes ready
May 8 00:51:35.364779 systemd-networkd[1091]: lxcf370b37ca5fc: Gained carrier
May 8 00:51:35.966593 systemd-networkd[1091]: cilium_vxlan: Gained IPv6LL
May 8 00:51:36.307597 kubelet[2214]: E0508 00:51:36.307549 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:36.464083 kubelet[2214]: E0508 00:51:36.464029 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:36.738635 systemd-networkd[1091]: lxc_health: Gained IPv6LL
May 8 00:51:36.990451 systemd-networkd[1091]: lxc8dc57b079d3e: Gained IPv6LL
May 8 00:51:37.438487 systemd-networkd[1091]: lxcf370b37ca5fc: Gained IPv6LL
May 8 00:51:37.466153 kubelet[2214]: E0508 00:51:37.466116 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:37.778715 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:53338.service.
May 8 00:51:37.816837 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 53338 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:37.818071 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:37.821927 systemd-logind[1291]: New session 8 of user core.
May 8 00:51:37.822737 systemd[1]: Started session-8.scope.
May 8 00:51:37.991254 sshd[3431]: pam_unix(sshd:session): session closed for user core
May 8 00:51:37.994223 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:53338.service: Deactivated successfully.
May 8 00:51:37.995425 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:51:37.996121 systemd-logind[1291]: Session 8 logged out. Waiting for processes to exit.
May 8 00:51:37.997003 systemd-logind[1291]: Removed session 8.
May 8 00:51:39.078480 env[1309]: time="2025-05-08T00:51:39.078409330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:51:39.078480 env[1309]: time="2025-05-08T00:51:39.078449547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:51:39.078480 env[1309]: time="2025-05-08T00:51:39.078459486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:51:39.078909 env[1309]: time="2025-05-08T00:51:39.078592091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5eda10b1bf7234a9032cba4790c874ffda4e4ec7527c0d02c10a01a7bd1b4362 pid=3458 runtime=io.containerd.runc.v2
May 8 00:51:39.099708 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:51:39.126363 env[1309]: time="2025-05-08T00:51:39.126311846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tsj7r,Uid:bb8f87fb-8b13-4855-a0a9-74b987b3b895,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eda10b1bf7234a9032cba4790c874ffda4e4ec7527c0d02c10a01a7bd1b4362\""
May 8 00:51:39.128418 kubelet[2214]: E0508 00:51:39.128392 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:39.130230 env[1309]: time="2025-05-08T00:51:39.130202916Z" level=info msg="CreateContainer within sandbox \"5eda10b1bf7234a9032cba4790c874ffda4e4ec7527c0d02c10a01a7bd1b4362\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:51:39.179544 env[1309]: time="2025-05-08T00:51:39.179450016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:51:39.179544 env[1309]: time="2025-05-08T00:51:39.179505844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:51:39.179544 env[1309]: time="2025-05-08T00:51:39.179524368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:51:39.179878 env[1309]: time="2025-05-08T00:51:39.179815366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5350ecdefd76fa6ad90f81f67b920951fa418e9a040b19a9e024d0d76dc4370a pid=3499 runtime=io.containerd.runc.v2
May 8 00:51:39.203010 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:51:39.227958 env[1309]: time="2025-05-08T00:51:39.227886604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6btcg,Uid:0c0aba97-120a-46c5-bdb6-fe214ec015e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5350ecdefd76fa6ad90f81f67b920951fa418e9a040b19a9e024d0d76dc4370a\""
May 8 00:51:39.228588 kubelet[2214]: E0508 00:51:39.228561 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:39.230358 env[1309]: time="2025-05-08T00:51:39.230314774Z" level=info msg="CreateContainer within sandbox \"5350ecdefd76fa6ad90f81f67b920951fa418e9a040b19a9e024d0d76dc4370a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:51:40.070278 env[1309]: time="2025-05-08T00:51:40.070207118Z" level=info msg="CreateContainer within sandbox \"5eda10b1bf7234a9032cba4790c874ffda4e4ec7527c0d02c10a01a7bd1b4362\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95d82511ac700616a16a015a151cd92a2227a38b360193754bccf7cb42a42306\""
May 8 00:51:40.070946 env[1309]: time="2025-05-08T00:51:40.070882690Z" level=info msg="StartContainer for \"95d82511ac700616a16a015a151cd92a2227a38b360193754bccf7cb42a42306\""
May 8 00:51:40.081962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663860976.mount: Deactivated successfully.
May 8 00:51:40.413447 env[1309]: time="2025-05-08T00:51:40.413290029Z" level=info msg="CreateContainer within sandbox \"5350ecdefd76fa6ad90f81f67b920951fa418e9a040b19a9e024d0d76dc4370a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a6f0f0e6ce3e194fbd798e5da5cde17c317b8fa91cdf77fefd076cf264d2a0c\""
May 8 00:51:40.414188 env[1309]: time="2025-05-08T00:51:40.414153552Z" level=info msg="StartContainer for \"7a6f0f0e6ce3e194fbd798e5da5cde17c317b8fa91cdf77fefd076cf264d2a0c\""
May 8 00:51:40.682426 env[1309]: time="2025-05-08T00:51:40.682223901Z" level=info msg="StartContainer for \"95d82511ac700616a16a015a151cd92a2227a38b360193754bccf7cb42a42306\" returns successfully"
May 8 00:51:40.835930 env[1309]: time="2025-05-08T00:51:40.835865575Z" level=info msg="StartContainer for \"7a6f0f0e6ce3e194fbd798e5da5cde17c317b8fa91cdf77fefd076cf264d2a0c\" returns successfully"
May 8 00:51:40.838957 kubelet[2214]: E0508 00:51:40.838901 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:40.840203 kubelet[2214]: E0508 00:51:40.840179 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:41.414136 kubelet[2214]: I0508 00:51:41.414062 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tsj7r" podStartSLOduration=39.414037493 podStartE2EDuration="39.414037493s" podCreationTimestamp="2025-05-08 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:51:41.22202386 +0000 UTC m=+56.676321718" watchObservedRunningTime="2025-05-08 00:51:41.414037493 +0000 UTC m=+56.868335351"
May 8 00:51:41.414415 kubelet[2214]: I0508 00:51:41.414174 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6btcg" podStartSLOduration=39.414168073 podStartE2EDuration="39.414168073s" podCreationTimestamp="2025-05-08 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:51:41.413588324 +0000 UTC m=+56.867886192" watchObservedRunningTime="2025-05-08 00:51:41.414168073 +0000 UTC m=+56.868465941"
May 8 00:51:41.842138 kubelet[2214]: E0508 00:51:41.842093 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:41.842853 kubelet[2214]: E0508 00:51:41.842826 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:42.845029 kubelet[2214]: E0508 00:51:42.844968 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:42.845029 kubelet[2214]: E0508 00:51:42.845001 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:42.995942 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:53342.service.
May 8 00:51:43.037188 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 53342 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:43.038648 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:43.048598 systemd-logind[1291]: New session 9 of user core.
May 8 00:51:43.050340 systemd[1]: Started session-9.scope.
May 8 00:51:43.289921 sshd[3619]: pam_unix(sshd:session): session closed for user core
May 8 00:51:43.294865 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:53342.service: Deactivated successfully.
May 8 00:51:43.296853 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:51:43.297849 systemd-logind[1291]: Session 9 logged out. Waiting for processes to exit.
May 8 00:51:43.299350 systemd-logind[1291]: Removed session 9.
May 8 00:51:43.847242 kubelet[2214]: E0508 00:51:43.847179 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:43.847918 kubelet[2214]: E0508 00:51:43.847436 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:48.294478 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:53090.service.
May 8 00:51:48.328838 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 53090 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:48.330138 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:48.334587 systemd-logind[1291]: New session 10 of user core.
May 8 00:51:48.335319 systemd[1]: Started session-10.scope.
May 8 00:51:48.455866 sshd[3637]: pam_unix(sshd:session): session closed for user core
May 8 00:51:48.458542 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:53090.service: Deactivated successfully.
May 8 00:51:48.459861 systemd-logind[1291]: Session 10 logged out. Waiting for processes to exit.
May 8 00:51:48.459930 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:51:48.460850 systemd-logind[1291]: Removed session 10.
May 8 00:51:53.460163 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:53104.service.
May 8 00:51:53.494934 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 53104 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:53.496201 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:53.500566 systemd-logind[1291]: New session 11 of user core.
May 8 00:51:53.501771 systemd[1]: Started session-11.scope.
May 8 00:51:53.639820 sshd[3653]: pam_unix(sshd:session): session closed for user core
May 8 00:51:53.642614 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:53104.service: Deactivated successfully.
May 8 00:51:53.643726 systemd-logind[1291]: Session 11 logged out. Waiting for processes to exit.
May 8 00:51:53.643755 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:51:53.644757 systemd-logind[1291]: Removed session 11.
May 8 00:51:56.675247 kubelet[2214]: E0508 00:51:56.675195 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:58.646070 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:40174.service.
May 8 00:51:58.682811 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg
May 8 00:51:58.684339 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:58.688961 systemd-logind[1291]: New session 12 of user core.
May 8 00:51:58.690414 systemd[1]: Started session-12.scope.
May 8 00:51:58.849888 sshd[3669]: pam_unix(sshd:session): session closed for user core
May 8 00:51:58.852840 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:40188.service.
May 8 00:51:58.853400 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:40174.service: Deactivated successfully.
May 8 00:51:58.854747 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:51:58.855281 systemd-logind[1291]: Session 12 logged out. Waiting for processes to exit.
May 8 00:51:58.856215 systemd-logind[1291]: Removed session 12. May 8 00:51:58.889145 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 40188 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:51:58.891089 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:58.898437 systemd-logind[1291]: New session 13 of user core. May 8 00:51:58.899207 systemd[1]: Started session-13.scope. May 8 00:51:59.058333 sshd[3683]: pam_unix(sshd:session): session closed for user core May 8 00:51:59.061636 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:40196.service. May 8 00:51:59.062219 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:40188.service: Deactivated successfully. May 8 00:51:59.063046 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:51:59.065728 systemd-logind[1291]: Session 13 logged out. Waiting for processes to exit. May 8 00:51:59.067255 systemd-logind[1291]: Removed session 13. May 8 00:51:59.094709 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 40196 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:51:59.096054 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:59.100115 systemd-logind[1291]: New session 14 of user core. May 8 00:51:59.100954 systemd[1]: Started session-14.scope. May 8 00:51:59.208249 sshd[3695]: pam_unix(sshd:session): session closed for user core May 8 00:51:59.210597 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:40196.service: Deactivated successfully. May 8 00:51:59.211758 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:51:59.211764 systemd-logind[1291]: Session 14 logged out. Waiting for processes to exit. May 8 00:51:59.212870 systemd-logind[1291]: Removed session 14. May 8 00:52:04.211906 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:40198.service. 
May 8 00:52:04.243639 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:04.244858 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:04.249290 systemd-logind[1291]: New session 15 of user core. May 8 00:52:04.249756 systemd[1]: Started session-15.scope. May 8 00:52:04.705852 sshd[3712]: pam_unix(sshd:session): session closed for user core May 8 00:52:04.709404 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:40198.service: Deactivated successfully. May 8 00:52:04.710242 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:52:04.711097 systemd-logind[1291]: Session 15 logged out. Waiting for processes to exit. May 8 00:52:04.711920 systemd-logind[1291]: Removed session 15. May 8 00:52:05.674592 kubelet[2214]: E0508 00:52:05.674524 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:09.709400 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:38924.service. May 8 00:52:09.744298 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:09.745503 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:09.749865 systemd-logind[1291]: New session 16 of user core. May 8 00:52:09.750783 systemd[1]: Started session-16.scope. May 8 00:52:09.885428 sshd[3728]: pam_unix(sshd:session): session closed for user core May 8 00:52:09.887789 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:38924.service: Deactivated successfully. May 8 00:52:09.889137 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:52:09.889374 systemd-logind[1291]: Session 16 logged out. Waiting for processes to exit. May 8 00:52:09.890474 systemd-logind[1291]: Removed session 16. 
May 8 00:52:11.674357 kubelet[2214]: E0508 00:52:11.674291 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:13.675405 kubelet[2214]: E0508 00:52:13.675321 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:14.889999 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:38932.service. May 8 00:52:14.925866 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 38932 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:14.927385 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:14.931926 systemd-logind[1291]: New session 17 of user core. May 8 00:52:14.932930 systemd[1]: Started session-17.scope. May 8 00:52:15.086630 sshd[3742]: pam_unix(sshd:session): session closed for user core May 8 00:52:15.089464 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:47800.service. May 8 00:52:15.090066 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:38932.service: Deactivated successfully. May 8 00:52:15.091383 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:52:15.091954 systemd-logind[1291]: Session 17 logged out. Waiting for processes to exit. May 8 00:52:15.093444 systemd-logind[1291]: Removed session 17. May 8 00:52:15.125067 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 47800 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:15.126602 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:15.130515 systemd-logind[1291]: New session 18 of user core. May 8 00:52:15.131621 systemd[1]: Started session-18.scope. 
May 8 00:52:15.611722 sshd[3754]: pam_unix(sshd:session): session closed for user core May 8 00:52:15.614892 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:47812.service. May 8 00:52:15.615642 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:47800.service: Deactivated successfully. May 8 00:52:15.616755 systemd-logind[1291]: Session 18 logged out. Waiting for processes to exit. May 8 00:52:15.616847 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:52:15.617872 systemd-logind[1291]: Removed session 18. May 8 00:52:15.646816 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 47812 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:15.648241 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:15.652334 systemd-logind[1291]: New session 19 of user core. May 8 00:52:15.653237 systemd[1]: Started session-19.scope. May 8 00:52:19.474850 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:47826.service. May 8 00:52:19.475652 sshd[3767]: pam_unix(sshd:session): session closed for user core May 8 00:52:19.478066 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:47812.service: Deactivated successfully. May 8 00:52:19.479146 systemd-logind[1291]: Session 19 logged out. Waiting for processes to exit. May 8 00:52:19.479324 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:52:19.480714 systemd-logind[1291]: Removed session 19. May 8 00:52:19.511943 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 47826 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:19.513252 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:19.518074 systemd-logind[1291]: New session 20 of user core. May 8 00:52:19.519322 systemd[1]: Started session-20.scope. May 8 00:52:20.665525 sshd[3799]: pam_unix(sshd:session): session closed for user core May 8 00:52:20.668117 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:47836.service. 
May 8 00:52:20.669135 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:47826.service: Deactivated successfully. May 8 00:52:20.670115 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:52:20.670695 systemd-logind[1291]: Session 20 logged out. Waiting for processes to exit. May 8 00:52:20.671528 systemd-logind[1291]: Removed session 20. May 8 00:52:20.705723 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:20.707508 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:20.712173 systemd-logind[1291]: New session 21 of user core. May 8 00:52:20.713073 systemd[1]: Started session-21.scope. May 8 00:52:20.824138 sshd[3811]: pam_unix(sshd:session): session closed for user core May 8 00:52:20.826987 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:47836.service: Deactivated successfully. May 8 00:52:20.828236 systemd-logind[1291]: Session 21 logged out. Waiting for processes to exit. May 8 00:52:20.828323 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:52:20.829588 systemd-logind[1291]: Removed session 21. May 8 00:52:25.827369 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:40074.service. May 8 00:52:25.859935 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 40074 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:25.861144 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:25.864953 systemd-logind[1291]: New session 22 of user core. May 8 00:52:25.865726 systemd[1]: Started session-22.scope. May 8 00:52:26.024895 sshd[3827]: pam_unix(sshd:session): session closed for user core May 8 00:52:26.026968 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:40074.service: Deactivated successfully. May 8 00:52:26.027840 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:52:26.028814 systemd-logind[1291]: Session 22 logged out. 
Waiting for processes to exit. May 8 00:52:26.029620 systemd-logind[1291]: Removed session 22. May 8 00:52:31.029100 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:40084.service. May 8 00:52:31.060899 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 40084 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:31.062466 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:31.067828 systemd-logind[1291]: New session 23 of user core. May 8 00:52:31.068782 systemd[1]: Started session-23.scope. May 8 00:52:31.216179 sshd[3841]: pam_unix(sshd:session): session closed for user core May 8 00:52:31.218765 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:40084.service: Deactivated successfully. May 8 00:52:31.219600 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:52:31.220397 systemd-logind[1291]: Session 23 logged out. Waiting for processes to exit. May 8 00:52:31.221126 systemd-logind[1291]: Removed session 23. May 8 00:52:36.218853 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:37544.service. May 8 00:52:36.250801 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 37544 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:36.252455 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:36.256567 systemd-logind[1291]: New session 24 of user core. May 8 00:52:36.257734 systemd[1]: Started session-24.scope. May 8 00:52:36.437874 sshd[3857]: pam_unix(sshd:session): session closed for user core May 8 00:52:36.440313 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:37544.service: Deactivated successfully. May 8 00:52:36.441152 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:52:36.442131 systemd-logind[1291]: Session 24 logged out. Waiting for processes to exit. May 8 00:52:36.443155 systemd-logind[1291]: Removed session 24. 
May 8 00:52:41.442179 systemd[1]: Started sshd@24-10.0.0.121:22-10.0.0.1:37552.service. May 8 00:52:41.475583 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 37552 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:41.477157 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:41.481216 systemd-logind[1291]: New session 25 of user core. May 8 00:52:41.482275 systemd[1]: Started session-25.scope. May 8 00:52:41.621425 sshd[3874]: pam_unix(sshd:session): session closed for user core May 8 00:52:41.624870 systemd[1]: sshd@24-10.0.0.121:22-10.0.0.1:37552.service: Deactivated successfully. May 8 00:52:41.626482 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:52:41.627075 systemd-logind[1291]: Session 25 logged out. Waiting for processes to exit. May 8 00:52:41.628085 systemd-logind[1291]: Removed session 25. May 8 00:52:42.675300 kubelet[2214]: E0508 00:52:42.675155 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:46.624454 systemd[1]: Started sshd@25-10.0.0.121:22-10.0.0.1:47580.service. May 8 00:52:46.657919 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:46.659657 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:46.664510 systemd-logind[1291]: New session 26 of user core. May 8 00:52:46.665669 systemd[1]: Started session-26.scope. May 8 00:52:46.775422 sshd[3890]: pam_unix(sshd:session): session closed for user core May 8 00:52:46.777694 systemd[1]: sshd@25-10.0.0.121:22-10.0.0.1:47580.service: Deactivated successfully. May 8 00:52:46.778552 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:52:46.779379 systemd-logind[1291]: Session 26 logged out. Waiting for processes to exit. 
May 8 00:52:46.780208 systemd-logind[1291]: Removed session 26. May 8 00:52:47.675130 kubelet[2214]: E0508 00:52:47.675067 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:51.675198 kubelet[2214]: E0508 00:52:51.675145 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:51.778826 systemd[1]: Started sshd@26-10.0.0.121:22-10.0.0.1:47592.service. May 8 00:52:51.813576 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 47592 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:51.815011 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:51.819309 systemd-logind[1291]: New session 27 of user core. May 8 00:52:51.820600 systemd[1]: Started session-27.scope. May 8 00:52:51.935035 sshd[3905]: pam_unix(sshd:session): session closed for user core May 8 00:52:51.937761 systemd[1]: sshd@26-10.0.0.121:22-10.0.0.1:47592.service: Deactivated successfully. May 8 00:52:51.938894 systemd-logind[1291]: Session 27 logged out. Waiting for processes to exit. May 8 00:52:51.938958 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:52:51.939948 systemd-logind[1291]: Removed session 27. May 8 00:52:53.675020 kubelet[2214]: E0508 00:52:53.674935 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:52:56.938543 systemd[1]: Started sshd@27-10.0.0.121:22-10.0.0.1:44148.service. 
May 8 00:52:56.970046 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 44148 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:56.971215 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:56.974637 systemd-logind[1291]: New session 28 of user core. May 8 00:52:56.975494 systemd[1]: Started session-28.scope. May 8 00:52:57.078654 sshd[3920]: pam_unix(sshd:session): session closed for user core May 8 00:52:57.082107 systemd[1]: Started sshd@28-10.0.0.121:22-10.0.0.1:44156.service. May 8 00:52:57.082852 systemd[1]: sshd@27-10.0.0.121:22-10.0.0.1:44148.service: Deactivated successfully. May 8 00:52:57.084596 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:52:57.085084 systemd-logind[1291]: Session 28 logged out. Waiting for processes to exit. May 8 00:52:57.086291 systemd-logind[1291]: Removed session 28. May 8 00:52:57.117941 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 44156 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:52:57.119165 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:52:57.122904 systemd-logind[1291]: New session 29 of user core. May 8 00:52:57.123751 systemd[1]: Started session-29.scope. May 8 00:52:58.881115 systemd[1]: run-containerd-runc-k8s.io-8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab-runc.82v6yx.mount: Deactivated successfully. 
May 8 00:52:58.969040 env[1309]: time="2025-05-08T00:52:58.968956115Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:52:58.980784 env[1309]: time="2025-05-08T00:52:58.980746041Z" level=info msg="StopContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" with timeout 2 (s)" May 8 00:52:58.980989 env[1309]: time="2025-05-08T00:52:58.980972749Z" level=info msg="Stop container \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" with signal terminated" May 8 00:52:58.986660 systemd-networkd[1091]: lxc_health: Link DOWN May 8 00:52:58.986667 systemd-networkd[1091]: lxc_health: Lost carrier May 8 00:52:59.040069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab-rootfs.mount: Deactivated successfully. May 8 00:52:59.170838 env[1309]: time="2025-05-08T00:52:59.170663919Z" level=info msg="StopContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" with timeout 30 (s)" May 8 00:52:59.171134 env[1309]: time="2025-05-08T00:52:59.171104279Z" level=info msg="Stop container \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" with signal terminated" May 8 00:52:59.199211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c-rootfs.mount: Deactivated successfully. 
May 8 00:52:59.310920 env[1309]: time="2025-05-08T00:52:59.310862497Z" level=info msg="shim disconnected" id=8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab May 8 00:52:59.310920 env[1309]: time="2025-05-08T00:52:59.310920656Z" level=warning msg="cleaning up after shim disconnected" id=8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab namespace=k8s.io May 8 00:52:59.310920 env[1309]: time="2025-05-08T00:52:59.310930445Z" level=info msg="cleaning up dead shim" May 8 00:52:59.318303 env[1309]: time="2025-05-08T00:52:59.318220838Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:52:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4010 runtime=io.containerd.runc.v2\n" May 8 00:52:59.351376 env[1309]: time="2025-05-08T00:52:59.351292574Z" level=info msg="shim disconnected" id=828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c May 8 00:52:59.351376 env[1309]: time="2025-05-08T00:52:59.351360222Z" level=warning msg="cleaning up after shim disconnected" id=828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c namespace=k8s.io May 8 00:52:59.351376 env[1309]: time="2025-05-08T00:52:59.351378626Z" level=info msg="cleaning up dead shim" May 8 00:52:59.357498 env[1309]: time="2025-05-08T00:52:59.357447528Z" level=info msg="StopContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" returns successfully" May 8 00:52:59.358141 env[1309]: time="2025-05-08T00:52:59.358116639Z" level=info msg="StopPodSandbox for \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\"" May 8 00:52:59.358307 env[1309]: time="2025-05-08T00:52:59.358279957Z" level=info msg="Container to stop \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.358407 env[1309]: time="2025-05-08T00:52:59.358382370Z" level=info msg="Container to stop 
\"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.358497 env[1309]: time="2025-05-08T00:52:59.358473150Z" level=info msg="Container to stop \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.358593 env[1309]: time="2025-05-08T00:52:59.358570955Z" level=info msg="Container to stop \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.358684 env[1309]: time="2025-05-08T00:52:59.358660734Z" level=info msg="Container to stop \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.359652 env[1309]: time="2025-05-08T00:52:59.359609623Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:52:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4023 runtime=io.containerd.runc.v2\n" May 8 00:52:59.365033 env[1309]: time="2025-05-08T00:52:59.364988553Z" level=info msg="StopContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" returns successfully" May 8 00:52:59.365711 env[1309]: time="2025-05-08T00:52:59.365676821Z" level=info msg="StopPodSandbox for \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\"" May 8 00:52:59.365777 env[1309]: time="2025-05-08T00:52:59.365759597Z" level=info msg="Container to stop \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:52:59.487583 env[1309]: time="2025-05-08T00:52:59.487436097Z" level=info msg="shim disconnected" id=e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad May 8 00:52:59.487583 env[1309]: time="2025-05-08T00:52:59.487492153Z" level=warning 
msg="cleaning up after shim disconnected" id=e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad namespace=k8s.io May 8 00:52:59.487583 env[1309]: time="2025-05-08T00:52:59.487501240Z" level=info msg="cleaning up dead shim" May 8 00:52:59.488307 env[1309]: time="2025-05-08T00:52:59.488169429Z" level=info msg="shim disconnected" id=572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433 May 8 00:52:59.488307 env[1309]: time="2025-05-08T00:52:59.488209816Z" level=warning msg="cleaning up after shim disconnected" id=572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433 namespace=k8s.io May 8 00:52:59.488307 env[1309]: time="2025-05-08T00:52:59.488220847Z" level=info msg="cleaning up dead shim" May 8 00:52:59.494188 env[1309]: time="2025-05-08T00:52:59.494128584Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:52:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n" May 8 00:52:59.494594 env[1309]: time="2025-05-08T00:52:59.494552032Z" level=info msg="TearDown network for sandbox \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" successfully" May 8 00:52:59.494594 env[1309]: time="2025-05-08T00:52:59.494586827Z" level=info msg="StopPodSandbox for \"e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad\" returns successfully" May 8 00:52:59.496998 env[1309]: time="2025-05-08T00:52:59.496973096Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:52:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4076 runtime=io.containerd.runc.v2\n" May 8 00:52:59.497240 env[1309]: time="2025-05-08T00:52:59.497198271Z" level=info msg="TearDown network for sandbox \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\" successfully" May 8 00:52:59.497240 env[1309]: time="2025-05-08T00:52:59.497217296Z" level=info msg="StopPodSandbox for \"572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433\" returns successfully" 
May 8 00:52:59.671964 kubelet[2214]: I0508 00:52:59.671883 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-config-path\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.671964 kubelet[2214]: I0508 00:52:59.671943 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-net\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.671964 kubelet[2214]: I0508 00:52:59.671958 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-bpf-maps\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.671964 kubelet[2214]: I0508 00:52:59.671975 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-kernel\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.671991 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-etc-cni-netd\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.672007 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-clustermesh-secrets\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.672023 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-lib-modules\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.672045 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-cilium-config-path\") pod \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\" (UID: \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.672064 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-xtables-lock\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672583 kubelet[2214]: I0508 00:52:59.672080 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hostproc\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672726 kubelet[2214]: I0508 00:52:59.672099 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t79fv\" (UniqueName: \"kubernetes.io/projected/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-kube-api-access-t79fv\") pod \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\" (UID: \"6b0cf41d-071d-4d30-b83a-32bd2bdc33f6\") " May 8 00:52:59.672726 kubelet[2214]: I0508 00:52:59.672115 2214 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cni-path\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672726 kubelet[2214]: I0508 00:52:59.672128 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h859v\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-kube-api-access-h859v\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672726 kubelet[2214]: I0508 00:52:59.672113 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672726 kubelet[2214]: I0508 00:52:59.672177 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672849 kubelet[2214]: I0508 00:52:59.672142 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-cgroup\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672849 kubelet[2214]: I0508 00:52:59.672207 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672849 kubelet[2214]: I0508 00:52:59.672235 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hubble-tls\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672849 kubelet[2214]: I0508 00:52:59.672301 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672849 kubelet[2214]: I0508 00:52:59.672324 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672252 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-run\") pod \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\" (UID: \"86d7eaa7-85f4-4d05-9af2-eedae9936a4f\") " May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672678 2214 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672688 2214 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672696 2214 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672703 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672725 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.672969 kubelet[2214]: I0508 00:52:59.672744 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hostproc" (OuterVolumeSpecName: "hostproc") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.673143 kubelet[2214]: I0508 00:52:59.672758 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.674868 kubelet[2214]: I0508 00:52:59.674548 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" (UID: "6b0cf41d-071d-4d30-b83a-32bd2bdc33f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:52:59.674868 kubelet[2214]: I0508 00:52:59.674585 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.674868 kubelet[2214]: I0508 00:52:59.674600 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cni-path" (OuterVolumeSpecName: "cni-path") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:52:59.675105 kubelet[2214]: I0508 00:52:59.675062 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:52:59.675481 kubelet[2214]: I0508 00:52:59.675454 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:52:59.676039 kubelet[2214]: I0508 00:52:59.676005 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:52:59.677468 kubelet[2214]: I0508 00:52:59.677414 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-kube-api-access-t79fv" (OuterVolumeSpecName: "kube-api-access-t79fv") pod "6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" (UID: "6b0cf41d-071d-4d30-b83a-32bd2bdc33f6"). InnerVolumeSpecName "kube-api-access-t79fv". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:52:59.677932 kubelet[2214]: I0508 00:52:59.677895 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-kube-api-access-h859v" (OuterVolumeSpecName: "kube-api-access-h859v") pod "86d7eaa7-85f4-4d05-9af2-eedae9936a4f" (UID: "86d7eaa7-85f4-4d05-9af2-eedae9936a4f"). InnerVolumeSpecName "kube-api-access-h859v". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:52:59.740156 kubelet[2214]: E0508 00:52:59.740017 2214 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:52:59.773425 kubelet[2214]: I0508 00:52:59.773379 2214 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773425 kubelet[2214]: I0508 00:52:59.773415 2214 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773425 kubelet[2214]: I0508 00:52:59.773425 2214 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-lib-modules\") on node 
\"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773439 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773449 2214 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773459 2214 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773469 2214 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t79fv\" (UniqueName: \"kubernetes.io/projected/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6-kube-api-access-t79fv\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773479 2214 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773487 2214 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h859v\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-kube-api-access-h859v\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773496 2214 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773633 kubelet[2214]: I0508 00:52:59.773505 2214 
reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.773808 kubelet[2214]: I0508 00:52:59.773514 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86d7eaa7-85f4-4d05-9af2-eedae9936a4f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:52:59.873892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433-rootfs.mount: Deactivated successfully. May 8 00:52:59.874107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-572689e2e5a0284ac8f6e9a99e30030fbb9968321056090780d81112b7518433-shm.mount: Deactivated successfully. May 8 00:52:59.874237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad-rootfs.mount: Deactivated successfully. May 8 00:52:59.874411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7e5cc63b5f70e6f6a71e6642c0dd03399509864a9e23b45164dc0bce46e3aad-shm.mount: Deactivated successfully. May 8 00:52:59.874559 systemd[1]: var-lib-kubelet-pods-6b0cf41d\x2d071d\x2d4d30\x2db83a\x2d32bd2bdc33f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt79fv.mount: Deactivated successfully. May 8 00:52:59.874692 systemd[1]: var-lib-kubelet-pods-86d7eaa7\x2d85f4\x2d4d05\x2d9af2\x2deedae9936a4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh859v.mount: Deactivated successfully. May 8 00:52:59.874817 systemd[1]: var-lib-kubelet-pods-86d7eaa7\x2d85f4\x2d4d05\x2d9af2\x2deedae9936a4f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:52:59.874942 systemd[1]: var-lib-kubelet-pods-86d7eaa7\x2d85f4\x2d4d05\x2d9af2\x2deedae9936a4f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:53:00.005642 kubelet[2214]: I0508 00:53:00.005588 2214 scope.go:117] "RemoveContainer" containerID="8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab" May 8 00:53:00.007015 env[1309]: time="2025-05-08T00:53:00.006974050Z" level=info msg="RemoveContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\"" May 8 00:53:00.161083 env[1309]: time="2025-05-08T00:53:00.161010883Z" level=info msg="RemoveContainer for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" returns successfully" May 8 00:53:00.161427 kubelet[2214]: I0508 00:53:00.161384 2214 scope.go:117] "RemoveContainer" containerID="da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409" May 8 00:53:00.162871 env[1309]: time="2025-05-08T00:53:00.162817348Z" level=info msg="RemoveContainer for \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\"" May 8 00:53:00.334338 env[1309]: time="2025-05-08T00:53:00.334178928Z" level=info msg="RemoveContainer for \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\" returns successfully" May 8 00:53:00.334916 kubelet[2214]: I0508 00:53:00.334872 2214 scope.go:117] "RemoveContainer" containerID="e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9" May 8 00:53:00.336281 env[1309]: time="2025-05-08T00:53:00.336227299Z" level=info msg="RemoveContainer for \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\"" May 8 00:53:00.419003 env[1309]: time="2025-05-08T00:53:00.418928529Z" level=info msg="RemoveContainer for \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\" returns successfully" May 8 00:53:00.419312 kubelet[2214]: I0508 00:53:00.419232 2214 scope.go:117] "RemoveContainer" 
containerID="471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e" May 8 00:53:00.420963 env[1309]: time="2025-05-08T00:53:00.420901708Z" level=info msg="RemoveContainer for \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\"" May 8 00:53:00.504713 env[1309]: time="2025-05-08T00:53:00.504650421Z" level=info msg="RemoveContainer for \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\" returns successfully" May 8 00:53:00.505153 kubelet[2214]: I0508 00:53:00.505086 2214 scope.go:117] "RemoveContainer" containerID="3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583" May 8 00:53:00.506533 env[1309]: time="2025-05-08T00:53:00.506482085Z" level=info msg="RemoveContainer for \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\"" May 8 00:53:00.608142 env[1309]: time="2025-05-08T00:53:00.607994354Z" level=info msg="RemoveContainer for \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\" returns successfully" May 8 00:53:00.608520 kubelet[2214]: I0508 00:53:00.608480 2214 scope.go:117] "RemoveContainer" containerID="8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab" May 8 00:53:00.608825 env[1309]: time="2025-05-08T00:53:00.608742073Z" level=error msg="ContainerStatus for \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\": not found" May 8 00:53:00.608964 kubelet[2214]: E0508 00:53:00.608942 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\": not found" containerID="8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab" May 8 00:53:00.609050 kubelet[2214]: I0508 00:53:00.608969 2214 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab"} err="failed to get container status \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c9d441d25ff10f6b052c9d3ae360cb22f6339b7c29084bb5e1c343c3a1020ab\": not found" May 8 00:53:00.609146 kubelet[2214]: I0508 00:53:00.609050 2214 scope.go:117] "RemoveContainer" containerID="da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409" May 8 00:53:00.609222 env[1309]: time="2025-05-08T00:53:00.609184848Z" level=error msg="ContainerStatus for \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\": not found" May 8 00:53:00.609326 kubelet[2214]: E0508 00:53:00.609309 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\": not found" containerID="da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409" May 8 00:53:00.609368 kubelet[2214]: I0508 00:53:00.609328 2214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409"} err="failed to get container status \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\": rpc error: code = NotFound desc = an error occurred when try to find container \"da8a0d75f3f85c247402ca44b1e05309dc1730130b9073bb14a69a55555c8409\": not found" May 8 00:53:00.609368 kubelet[2214]: I0508 00:53:00.609343 2214 scope.go:117] "RemoveContainer" 
containerID="e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9" May 8 00:53:00.609503 env[1309]: time="2025-05-08T00:53:00.609466388Z" level=error msg="ContainerStatus for \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\": not found" May 8 00:53:00.609595 kubelet[2214]: E0508 00:53:00.609574 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\": not found" containerID="e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9" May 8 00:53:00.609639 kubelet[2214]: I0508 00:53:00.609597 2214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9"} err="failed to get container status \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1a7439f65b1bd147bec076ad7c7eda102fba0507c339034098e784fd4393af9\": not found" May 8 00:53:00.609639 kubelet[2214]: I0508 00:53:00.609609 2214 scope.go:117] "RemoveContainer" containerID="471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e" May 8 00:53:00.609760 env[1309]: time="2025-05-08T00:53:00.609725778Z" level=error msg="ContainerStatus for \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\": not found" May 8 00:53:00.609867 kubelet[2214]: E0508 00:53:00.609845 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\": not found" containerID="471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e" May 8 00:53:00.609914 kubelet[2214]: I0508 00:53:00.609870 2214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e"} err="failed to get container status \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"471302692bb98d6e5caf12d413f6737420bc28b20e9deecd271581a108ffca3e\": not found" May 8 00:53:00.609914 kubelet[2214]: I0508 00:53:00.609884 2214 scope.go:117] "RemoveContainer" containerID="3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583" May 8 00:53:00.610102 env[1309]: time="2025-05-08T00:53:00.610051161Z" level=error msg="ContainerStatus for \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\": not found" May 8 00:53:00.610213 kubelet[2214]: E0508 00:53:00.610188 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\": not found" containerID="3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583" May 8 00:53:00.610304 kubelet[2214]: I0508 00:53:00.610218 2214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583"} err="failed to get container status \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"3ff42ee5df968dc5e412a1296bdc5d3035cf6b01d0f60ad5404272499e0d6583\": not found" May 8 00:53:00.610304 kubelet[2214]: I0508 00:53:00.610250 2214 scope.go:117] "RemoveContainer" containerID="828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c" May 8 00:53:00.611275 env[1309]: time="2025-05-08T00:53:00.611221738Z" level=info msg="RemoveContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\"" May 8 00:53:00.673978 sshd[3935]: pam_unix(sshd:session): session closed for user core May 8 00:53:00.677253 systemd[1]: Started sshd@29-10.0.0.121:22-10.0.0.1:44166.service. May 8 00:53:00.679917 kubelet[2214]: I0508 00:53:00.677890 2214 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" path="/var/lib/kubelet/pods/6b0cf41d-071d-4d30-b83a-32bd2bdc33f6/volumes" May 8 00:53:00.679917 kubelet[2214]: I0508 00:53:00.678441 2214 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" path="/var/lib/kubelet/pods/86d7eaa7-85f4-4d05-9af2-eedae9936a4f/volumes" May 8 00:53:00.677936 systemd[1]: sshd@28-10.0.0.121:22-10.0.0.1:44156.service: Deactivated successfully. May 8 00:53:00.679820 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:53:00.680214 systemd-logind[1291]: Session 29 logged out. Waiting for processes to exit. May 8 00:53:00.681306 systemd-logind[1291]: Removed session 29. May 8 00:53:00.716780 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 44166 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:53:00.717931 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:53:00.721729 systemd-logind[1291]: New session 30 of user core. 
May 8 00:53:00.722136 env[1309]: time="2025-05-08T00:53:00.721829402Z" level=info msg="RemoveContainer for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" returns successfully" May 8 00:53:00.722191 kubelet[2214]: I0508 00:53:00.722110 2214 scope.go:117] "RemoveContainer" containerID="828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c" May 8 00:53:00.722418 env[1309]: time="2025-05-08T00:53:00.722337791Z" level=error msg="ContainerStatus for \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\": not found" May 8 00:53:00.722505 kubelet[2214]: E0508 00:53:00.722477 2214 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\": not found" containerID="828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c" May 8 00:53:00.722565 kubelet[2214]: I0508 00:53:00.722501 2214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c"} err="failed to get container status \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\": rpc error: code = NotFound desc = an error occurred when try to find container \"828caf098388f44c7860575f8ded4684c3a503b6827c2f248fae8b00141d075c\": not found" May 8 00:53:00.722551 systemd[1]: Started session-30.scope. May 8 00:53:01.682803 sshd[4104]: pam_unix(sshd:session): session closed for user core May 8 00:53:01.685617 systemd[1]: Started sshd@30-10.0.0.121:22-10.0.0.1:44170.service. May 8 00:53:01.691648 systemd[1]: sshd@29-10.0.0.121:22-10.0.0.1:44166.service: Deactivated successfully. 
May 8 00:53:01.693313 systemd-logind[1291]: Session 30 logged out. Waiting for processes to exit. May 8 00:53:01.693470 systemd[1]: session-30.scope: Deactivated successfully. May 8 00:53:01.694464 systemd-logind[1291]: Removed session 30. May 8 00:53:01.706093 kubelet[2214]: I0508 00:53:01.705978 2214 topology_manager.go:215] "Topology Admit Handler" podUID="5458e8d0-98e4-4830-993c-1845545bc1b8" podNamespace="kube-system" podName="cilium-cpl5j" May 8 00:53:01.706093 kubelet[2214]: E0508 00:53:01.706071 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="mount-cgroup" May 8 00:53:01.706093 kubelet[2214]: E0508 00:53:01.706084 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="cilium-agent" May 8 00:53:01.706093 kubelet[2214]: E0508 00:53:01.706091 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="apply-sysctl-overwrites" May 8 00:53:01.706093 kubelet[2214]: E0508 00:53:01.706096 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="mount-bpf-fs" May 8 00:53:01.706602 kubelet[2214]: E0508 00:53:01.706101 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="clean-cilium-state" May 8 00:53:01.706602 kubelet[2214]: E0508 00:53:01.706153 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" containerName="cilium-operator" May 8 00:53:01.714333 kubelet[2214]: I0508 00:53:01.713131 2214 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0cf41d-071d-4d30-b83a-32bd2bdc33f6" containerName="cilium-operator" May 8 00:53:01.714333 kubelet[2214]: I0508 00:53:01.713206 2214 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="86d7eaa7-85f4-4d05-9af2-eedae9936a4f" containerName="cilium-agent" May 8 00:53:01.735991 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 44170 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:53:01.737688 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:53:01.743441 systemd-logind[1291]: New session 31 of user core. May 8 00:53:01.744470 systemd[1]: Started session-31.scope. May 8 00:53:01.886161 kubelet[2214]: I0508 00:53:01.886074 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-run\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886161 kubelet[2214]: I0508 00:53:01.886132 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-etc-cni-netd\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 kubelet[2214]: I0508 00:53:01.886150 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-ipsec-secrets\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 kubelet[2214]: I0508 00:53:01.886231 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgzp\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-kube-api-access-npgzp\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 
kubelet[2214]: I0508 00:53:01.886247 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-bpf-maps\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 kubelet[2214]: I0508 00:53:01.886282 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cni-path\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 kubelet[2214]: I0508 00:53:01.886295 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-clustermesh-secrets\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886421 kubelet[2214]: I0508 00:53:01.886311 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-config-path\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886327 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-lib-modules\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886365 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-net\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886389 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-cgroup\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886411 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-xtables-lock\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886426 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-hubble-tls\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886595 kubelet[2214]: I0508 00:53:01.886441 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-hostproc\") pod \"cilium-cpl5j\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:01.886734 kubelet[2214]: I0508 00:53:01.886455 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-kernel\") pod \"cilium-cpl5j\" 
(UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " pod="kube-system/cilium-cpl5j" May 8 00:53:02.086798 sshd[4116]: pam_unix(sshd:session): session closed for user core May 8 00:53:02.094042 systemd[1]: sshd@30-10.0.0.121:22-10.0.0.1:44170.service: Deactivated successfully. May 8 00:53:02.095007 systemd[1]: session-31.scope: Deactivated successfully. May 8 00:53:02.096015 systemd-logind[1291]: Session 31 logged out. Waiting for processes to exit. May 8 00:53:02.097928 systemd[1]: Started sshd@31-10.0.0.121:22-10.0.0.1:44178.service. May 8 00:53:02.099615 systemd-logind[1291]: Removed session 31. May 8 00:53:02.129460 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 44178 ssh2: RSA SHA256:1LBxu83eHkdm4X8dsk4zPTne32Wp9pee2vrXUZ4T9Dg May 8 00:53:02.130844 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:53:02.134573 systemd-logind[1291]: New session 32 of user core. May 8 00:53:02.135395 systemd[1]: Started session-32.scope. May 8 00:53:02.321722 kubelet[2214]: E0508 00:53:02.321664 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:53:02.322507 env[1309]: time="2025-05-08T00:53:02.322447054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpl5j,Uid:5458e8d0-98e4-4830-993c-1845545bc1b8,Namespace:kube-system,Attempt:0,}" May 8 00:53:02.453486 env[1309]: time="2025-05-08T00:53:02.453283762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:53:02.453486 env[1309]: time="2025-05-08T00:53:02.453334638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:53:02.453486 env[1309]: time="2025-05-08T00:53:02.453348644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:53:02.454326 env[1309]: time="2025-05-08T00:53:02.454035038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9 pid=4155 runtime=io.containerd.runc.v2 May 8 00:53:02.487144 env[1309]: time="2025-05-08T00:53:02.487079875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpl5j,Uid:5458e8d0-98e4-4830-993c-1845545bc1b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\"" May 8 00:53:02.488088 kubelet[2214]: E0508 00:53:02.488035 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:53:02.490654 env[1309]: time="2025-05-08T00:53:02.490614557Z" level=info msg="CreateContainer within sandbox \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:53:02.505610 env[1309]: time="2025-05-08T00:53:02.505518080Z" level=info msg="CreateContainer within sandbox \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\"" May 8 00:53:02.506423 env[1309]: time="2025-05-08T00:53:02.506389663Z" level=info msg="StartContainer for \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\"" May 8 00:53:02.554172 env[1309]: time="2025-05-08T00:53:02.554092139Z" level=info msg="StartContainer for \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\" returns successfully" May 8 00:53:02.601290 env[1309]: time="2025-05-08T00:53:02.601194784Z" level=info msg="shim disconnected" 
id=294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6 May 8 00:53:02.601290 env[1309]: time="2025-05-08T00:53:02.601300744Z" level=warning msg="cleaning up after shim disconnected" id=294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6 namespace=k8s.io May 8 00:53:02.601607 env[1309]: time="2025-05-08T00:53:02.601313648Z" level=info msg="cleaning up dead shim" May 8 00:53:02.609490 env[1309]: time="2025-05-08T00:53:02.609421060Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4237 runtime=io.containerd.runc.v2\n" May 8 00:53:03.020006 env[1309]: time="2025-05-08T00:53:03.019933086Z" level=info msg="StopPodSandbox for \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\"" May 8 00:53:03.020006 env[1309]: time="2025-05-08T00:53:03.020017264Z" level=info msg="Container to stop \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:53:03.023453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9-shm.mount: Deactivated successfully. May 8 00:53:03.104136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9-rootfs.mount: Deactivated successfully. 
May 8 00:53:03.111356 env[1309]: time="2025-05-08T00:53:03.111222193Z" level=info msg="shim disconnected" id=477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9 May 8 00:53:03.111356 env[1309]: time="2025-05-08T00:53:03.111316280Z" level=warning msg="cleaning up after shim disconnected" id=477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9 namespace=k8s.io May 8 00:53:03.111356 env[1309]: time="2025-05-08T00:53:03.111333784Z" level=info msg="cleaning up dead shim" May 8 00:53:03.119698 env[1309]: time="2025-05-08T00:53:03.119629810Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4269 runtime=io.containerd.runc.v2\n" May 8 00:53:03.120124 env[1309]: time="2025-05-08T00:53:03.120079077Z" level=info msg="TearDown network for sandbox \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\" successfully" May 8 00:53:03.120124 env[1309]: time="2025-05-08T00:53:03.120112580Z" level=info msg="StopPodSandbox for \"477612e8110db9bb5f5a58d7925cb142e418034d51c10a4e97b3dffeb082d5e9\" returns successfully" May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296089 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-xtables-lock\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296145 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-cgroup\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296160 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-etc-cni-netd\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296186 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgzp\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-kube-api-access-npgzp\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296217 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-bpf-maps\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.296762 kubelet[2214]: I0508 00:53:03.296236 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cni-path\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297546 kubelet[2214]: I0508 00:53:03.296286 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-config-path\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297546 kubelet[2214]: I0508 00:53:03.296308 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-net\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297546 kubelet[2214]: I0508 00:53:03.296330 2214 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-run\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297546 kubelet[2214]: I0508 00:53:03.296353 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-hubble-tls\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297546 kubelet[2214]: I0508 00:53:03.296348 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.297711 kubelet[2214]: I0508 00:53:03.296402 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.297711 kubelet[2214]: I0508 00:53:03.296370 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-kernel\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297711 kubelet[2214]: I0508 00:53:03.296442 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.297711 kubelet[2214]: I0508 00:53:03.296462 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cni-path" (OuterVolumeSpecName: "cni-path") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.297711 kubelet[2214]: I0508 00:53:03.296465 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-ipsec-secrets\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296494 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-lib-modules\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296511 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-clustermesh-secrets\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296527 2214 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-hostproc\") pod \"5458e8d0-98e4-4830-993c-1845545bc1b8\" (UID: \"5458e8d0-98e4-4830-993c-1845545bc1b8\") " May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296579 2214 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296587 2214 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.297951 
kubelet[2214]: I0508 00:53:03.296595 2214 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.297951 kubelet[2214]: I0508 00:53:03.296607 2214 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.298140 kubelet[2214]: I0508 00:53:03.296626 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-hostproc" (OuterVolumeSpecName: "hostproc") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.298140 kubelet[2214]: I0508 00:53:03.296645 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.298140 kubelet[2214]: I0508 00:53:03.296684 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.298140 kubelet[2214]: I0508 00:53:03.296708 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.298140 kubelet[2214]: I0508 00:53:03.297478 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.298315 kubelet[2214]: I0508 00:53:03.297504 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:53:03.299068 kubelet[2214]: I0508 00:53:03.299040 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:53:03.303202 systemd[1]: var-lib-kubelet-pods-5458e8d0\x2d98e4\x2d4830\x2d993c\x2d1845545bc1b8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 00:53:03.303389 systemd[1]: var-lib-kubelet-pods-5458e8d0\x2d98e4\x2d4830\x2d993c\x2d1845545bc1b8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:53:03.306113 systemd[1]: var-lib-kubelet-pods-5458e8d0\x2d98e4\x2d4830\x2d993c\x2d1845545bc1b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnpgzp.mount: Deactivated successfully. May 8 00:53:03.306248 systemd[1]: var-lib-kubelet-pods-5458e8d0\x2d98e4\x2d4830\x2d993c\x2d1845545bc1b8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:53:03.308061 kubelet[2214]: I0508 00:53:03.308020 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:53:03.308156 kubelet[2214]: I0508 00:53:03.308126 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:53:03.308328 kubelet[2214]: I0508 00:53:03.308254 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-kube-api-access-npgzp" (OuterVolumeSpecName: "kube-api-access-npgzp") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "kube-api-access-npgzp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:53:03.308328 kubelet[2214]: I0508 00:53:03.308317 2214 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5458e8d0-98e4-4830-993c-1845545bc1b8" (UID: "5458e8d0-98e4-4830-993c-1845545bc1b8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396851 2214 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396901 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396915 2214 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396922 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396930 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396937 2214 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396944 2214 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5458e8d0-98e4-4830-993c-1845545bc1b8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.396922 kubelet[2214]: I0508 00:53:03.396951 2214 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.397474 kubelet[2214]: I0508 00:53:03.396959 2214 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.397474 kubelet[2214]: I0508 00:53:03.396966 2214 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5458e8d0-98e4-4830-993c-1845545bc1b8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:53:03.397474 kubelet[2214]: I0508 00:53:03.396972 2214 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-npgzp\" (UniqueName: \"kubernetes.io/projected/5458e8d0-98e4-4830-993c-1845545bc1b8-kube-api-access-npgzp\") on node \"localhost\" 
DevicePath \"\"" May 8 00:53:04.023091 kubelet[2214]: I0508 00:53:04.023053 2214 scope.go:117] "RemoveContainer" containerID="294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6" May 8 00:53:04.024780 env[1309]: time="2025-05-08T00:53:04.024167625Z" level=info msg="RemoveContainer for \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\"" May 8 00:53:04.028545 env[1309]: time="2025-05-08T00:53:04.028496214Z" level=info msg="RemoveContainer for \"294ed6463cf286df77359e277e4b2bdf1d496bf620d149f1e7cf91f9f75235f6\" returns successfully" May 8 00:53:04.134722 kubelet[2214]: I0508 00:53:04.134659 2214 topology_manager.go:215] "Topology Admit Handler" podUID="93f43732-739a-4621-ae47-68e025bb25de" podNamespace="kube-system" podName="cilium-qlvj4" May 8 00:53:04.134973 kubelet[2214]: E0508 00:53:04.134737 2214 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5458e8d0-98e4-4830-993c-1845545bc1b8" containerName="mount-cgroup" May 8 00:53:04.134973 kubelet[2214]: I0508 00:53:04.134778 2214 memory_manager.go:354] "RemoveStaleState removing state" podUID="5458e8d0-98e4-4830-993c-1845545bc1b8" containerName="mount-cgroup" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303303 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-cni-path\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303368 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-lib-modules\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303398 2214 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-hostproc\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303418 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-xtables-lock\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303439 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93f43732-739a-4621-ae47-68e025bb25de-cilium-config-path\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.303529 kubelet[2214]: I0508 00:53:04.303459 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93f43732-739a-4621-ae47-68e025bb25de-hubble-tls\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303564 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-cilium-run\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303633 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-bpf-maps\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303653 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-etc-cni-netd\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303676 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93f43732-739a-4621-ae47-68e025bb25de-clustermesh-secrets\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303696 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93f43732-739a-4621-ae47-68e025bb25de-cilium-ipsec-secrets\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304345 kubelet[2214]: I0508 00:53:04.303723 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-host-proc-sys-kernel\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304620 kubelet[2214]: I0508 00:53:04.303741 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g772k\" (UniqueName: \"kubernetes.io/projected/93f43732-739a-4621-ae47-68e025bb25de-kube-api-access-g772k\") pod 
\"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304620 kubelet[2214]: I0508 00:53:04.303761 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-host-proc-sys-net\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.304620 kubelet[2214]: I0508 00:53:04.303781 2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93f43732-739a-4621-ae47-68e025bb25de-cilium-cgroup\") pod \"cilium-qlvj4\" (UID: \"93f43732-739a-4621-ae47-68e025bb25de\") " pod="kube-system/cilium-qlvj4" May 8 00:53:04.442136 kubelet[2214]: E0508 00:53:04.442091 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:53:04.442774 env[1309]: time="2025-05-08T00:53:04.442718115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlvj4,Uid:93f43732-739a-4621-ae47-68e025bb25de,Namespace:kube-system,Attempt:0,}" May 8 00:53:04.573462 env[1309]: time="2025-05-08T00:53:04.573241474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:53:04.573718 env[1309]: time="2025-05-08T00:53:04.573686984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:53:04.573881 env[1309]: time="2025-05-08T00:53:04.573834232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:53:04.574352 env[1309]: time="2025-05-08T00:53:04.574302655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c pid=4296 runtime=io.containerd.runc.v2
May 8 00:53:04.618115 env[1309]: time="2025-05-08T00:53:04.618062433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlvj4,Uid:93f43732-739a-4621-ae47-68e025bb25de,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\""
May 8 00:53:04.618657 kubelet[2214]: E0508 00:53:04.618628 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:04.621100 env[1309]: time="2025-05-08T00:53:04.621070953Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:53:04.676781 kubelet[2214]: I0508 00:53:04.676719 2214 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5458e8d0-98e4-4830-993c-1845545bc1b8" path="/var/lib/kubelet/pods/5458e8d0-98e4-4830-993c-1845545bc1b8/volumes"
May 8 00:53:04.740875 kubelet[2214]: E0508 00:53:04.740825 2214 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:53:05.178862 env[1309]: time="2025-05-08T00:53:05.178788672Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e2712c5c8e87253dbc40e5e18c1decda7e0bdcf73056d2b9c44103f20d0d631\""
May 8 00:53:05.179776 env[1309]: time="2025-05-08T00:53:05.179693507Z" level=info msg="StartContainer for \"6e2712c5c8e87253dbc40e5e18c1decda7e0bdcf73056d2b9c44103f20d0d631\""
May 8 00:53:05.307166 env[1309]: time="2025-05-08T00:53:05.307080005Z" level=info msg="StartContainer for \"6e2712c5c8e87253dbc40e5e18c1decda7e0bdcf73056d2b9c44103f20d0d631\" returns successfully"
May 8 00:53:05.339312 env[1309]: time="2025-05-08T00:53:05.339237713Z" level=info msg="shim disconnected" id=6e2712c5c8e87253dbc40e5e18c1decda7e0bdcf73056d2b9c44103f20d0d631
May 8 00:53:05.339312 env[1309]: time="2025-05-08T00:53:05.339311021Z" level=warning msg="cleaning up after shim disconnected" id=6e2712c5c8e87253dbc40e5e18c1decda7e0bdcf73056d2b9c44103f20d0d631 namespace=k8s.io
May 8 00:53:05.339312 env[1309]: time="2025-05-08T00:53:05.339321131Z" level=info msg="cleaning up dead shim"
May 8 00:53:05.346403 env[1309]: time="2025-05-08T00:53:05.346337655Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4380 runtime=io.containerd.runc.v2\n"
May 8 00:53:06.031439 kubelet[2214]: E0508 00:53:06.031393 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:06.033027 env[1309]: time="2025-05-08T00:53:06.032989150Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:53:06.177757 env[1309]: time="2025-05-08T00:53:06.177538185Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4\""
May 8 00:53:06.178371 env[1309]: time="2025-05-08T00:53:06.178343232Z" level=info msg="StartContainer for \"22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4\""
May 8 00:53:06.230961 env[1309]: time="2025-05-08T00:53:06.230914451Z" level=info msg="StartContainer for \"22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4\" returns successfully"
May 8 00:53:06.260455 env[1309]: time="2025-05-08T00:53:06.260384352Z" level=info msg="shim disconnected" id=22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4
May 8 00:53:06.260455 env[1309]: time="2025-05-08T00:53:06.260455356Z" level=warning msg="cleaning up after shim disconnected" id=22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4 namespace=k8s.io
May 8 00:53:06.260455 env[1309]: time="2025-05-08T00:53:06.260467839Z" level=info msg="cleaning up dead shim"
May 8 00:53:06.269095 env[1309]: time="2025-05-08T00:53:06.269018133Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4440 runtime=io.containerd.runc.v2\n"
May 8 00:53:06.410705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22eb023122da365f77826ffb5130c1b63f3e339f31c71cf17c85acae568523d4-rootfs.mount: Deactivated successfully.
May 8 00:53:07.035020 kubelet[2214]: E0508 00:53:07.034978 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:07.037136 env[1309]: time="2025-05-08T00:53:07.037071698Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:53:07.368618 kubelet[2214]: I0508 00:53:07.368449 2214 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:53:07Z","lastTransitionTime":"2025-05-08T00:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:53:07.550035 env[1309]: time="2025-05-08T00:53:07.549946798Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee\""
May 8 00:53:07.550829 env[1309]: time="2025-05-08T00:53:07.550802501Z" level=info msg="StartContainer for \"8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee\""
May 8 00:53:07.729956 env[1309]: time="2025-05-08T00:53:07.729816069Z" level=info msg="StartContainer for \"8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee\" returns successfully"
May 8 00:53:07.745342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee-rootfs.mount: Deactivated successfully.
May 8 00:53:08.040299 kubelet[2214]: E0508 00:53:08.039928 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:08.114127 env[1309]: time="2025-05-08T00:53:08.114049755Z" level=info msg="shim disconnected" id=8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee
May 8 00:53:08.114127 env[1309]: time="2025-05-08T00:53:08.114121180Z" level=warning msg="cleaning up after shim disconnected" id=8625fd06965cc69096f61466c08b712cd1e55e385d9156102f39309bc6d11fee namespace=k8s.io
May 8 00:53:08.114127 env[1309]: time="2025-05-08T00:53:08.114135086Z" level=info msg="cleaning up dead shim"
May 8 00:53:08.121994 env[1309]: time="2025-05-08T00:53:08.121927602Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4497 runtime=io.containerd.runc.v2\n"
May 8 00:53:09.044400 kubelet[2214]: E0508 00:53:09.044336 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:09.047384 env[1309]: time="2025-05-08T00:53:09.047315501Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:53:09.384267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899572338.mount: Deactivated successfully.
May 8 00:53:09.588527 env[1309]: time="2025-05-08T00:53:09.588425919Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89\""
May 8 00:53:09.589276 env[1309]: time="2025-05-08T00:53:09.589182534Z" level=info msg="StartContainer for \"dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89\""
May 8 00:53:09.679778 env[1309]: time="2025-05-08T00:53:09.679518190Z" level=info msg="StartContainer for \"dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89\" returns successfully"
May 8 00:53:09.730826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89-rootfs.mount: Deactivated successfully.
May 8 00:53:09.735784 env[1309]: time="2025-05-08T00:53:09.735733846Z" level=info msg="shim disconnected" id=dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89
May 8 00:53:09.735784 env[1309]: time="2025-05-08T00:53:09.735783509Z" level=warning msg="cleaning up after shim disconnected" id=dafc0b9d6a30012fdfd122c25514905b7b57ba56cc8a33b12f7ebdbabd63cb89 namespace=k8s.io
May 8 00:53:09.736020 env[1309]: time="2025-05-08T00:53:09.735792787Z" level=info msg="cleaning up dead shim"
May 8 00:53:09.742573 kubelet[2214]: E0508 00:53:09.742513 2214 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:53:09.742771 env[1309]: time="2025-05-08T00:53:09.742632136Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:53:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4551 runtime=io.containerd.runc.v2\n"
May 8 00:53:10.049611 kubelet[2214]: E0508 00:53:10.049561 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:10.060307 env[1309]: time="2025-05-08T00:53:10.054723260Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:53:10.437730 env[1309]: time="2025-05-08T00:53:10.437531750Z" level=info msg="CreateContainer within sandbox \"cb95d849595f1003908ade6a4853b3617977a01f105776dc523cdefd1fc5c70c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f\""
May 8 00:53:10.438608 env[1309]: time="2025-05-08T00:53:10.438564396Z" level=info msg="StartContainer for \"6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f\""
May 8 00:53:10.459719 systemd[1]: run-containerd-runc-k8s.io-6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f-runc.sQB4qL.mount: Deactivated successfully.
May 8 00:53:10.621666 env[1309]: time="2025-05-08T00:53:10.621594129Z" level=info msg="StartContainer for \"6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f\" returns successfully"
May 8 00:53:10.932292 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:53:11.056711 kubelet[2214]: E0508 00:53:11.056315 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:12.058318 kubelet[2214]: E0508 00:53:12.058243 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:13.060361 kubelet[2214]: E0508 00:53:13.060317 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:13.744200 systemd-networkd[1091]: lxc_health: Link UP
May 8 00:53:13.754580 systemd-networkd[1091]: lxc_health: Gained carrier
May 8 00:53:13.755308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:53:14.444117 kubelet[2214]: E0508 00:53:14.444058 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:14.521828 kubelet[2214]: I0508 00:53:14.521573 2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qlvj4" podStartSLOduration=10.521554067 podStartE2EDuration="10.521554067s" podCreationTimestamp="2025-05-08 00:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:53:11.088965104 +0000 UTC m=+146.543262972" watchObservedRunningTime="2025-05-08 00:53:14.521554067 +0000 UTC m=+149.975851915"
May 8 00:53:14.675919 kubelet[2214]: E0508 00:53:14.675823 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:15.063807 kubelet[2214]: E0508 00:53:15.063769 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:15.148633 systemd[1]: run-containerd-runc-k8s.io-6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f-runc.9nkz73.mount: Deactivated successfully.
May 8 00:53:15.806486 systemd-networkd[1091]: lxc_health: Gained IPv6LL
May 8 00:53:16.066091 kubelet[2214]: E0508 00:53:16.065972 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:18.674634 kubelet[2214]: E0508 00:53:18.674590 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:53:19.371093 systemd[1]: run-containerd-runc-k8s.io-6968ae91ac0e33ddef84a78db11c37e14e555c82b9a5152caf4a999b437ac43f-runc.RiHWAt.mount: Deactivated successfully.
May 8 00:53:19.415996 sshd[4136]: pam_unix(sshd:session): session closed for user core
May 8 00:53:19.418436 systemd[1]: sshd@31-10.0.0.121:22-10.0.0.1:44178.service: Deactivated successfully.
May 8 00:53:19.419460 systemd[1]: session-32.scope: Deactivated successfully.
May 8 00:53:19.419488 systemd-logind[1291]: Session 32 logged out. Waiting for processes to exit.
May 8 00:53:19.420504 systemd-logind[1291]: Removed session 32.
May 8 00:53:19.675419 kubelet[2214]: E0508 00:53:19.675238 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"