Sep 10 00:49:12.096138 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Sep 9 23:10:34 -00 2025 Sep 10 00:49:12.096159 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:49:12.096170 kernel: BIOS-provided physical RAM map: Sep 10 00:49:12.096175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 10 00:49:12.096181 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 10 00:49:12.096186 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 10 00:49:12.096193 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 10 00:49:12.096199 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 10 00:49:12.096205 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 10 00:49:12.096212 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 10 00:49:12.096217 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 10 00:49:12.096223 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 10 00:49:12.096229 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 10 00:49:12.096234 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 10 00:49:12.096241 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 10 00:49:12.096249 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 10 00:49:12.096255 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 10 
00:49:12.096261 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 10 00:49:12.096282 kernel: NX (Execute Disable) protection: active Sep 10 00:49:12.096301 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 10 00:49:12.096307 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 10 00:49:12.096313 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 10 00:49:12.096319 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 10 00:49:12.096325 kernel: extended physical RAM map: Sep 10 00:49:12.096331 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 10 00:49:12.096340 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 10 00:49:12.096346 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 10 00:49:12.096352 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 10 00:49:12.096358 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 10 00:49:12.096364 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 10 00:49:12.096370 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 10 00:49:12.096376 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Sep 10 00:49:12.096382 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Sep 10 00:49:12.096388 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Sep 10 00:49:12.096394 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Sep 10 00:49:12.096400 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Sep 10 00:49:12.096407 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 10 00:49:12.096413 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 10 00:49:12.096419 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 10 00:49:12.096425 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 10 00:49:12.096434 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 10 00:49:12.096441 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 10 00:49:12.096447 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 10 00:49:12.096455 kernel: efi: EFI v2.70 by EDK II Sep 10 00:49:12.096462 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Sep 10 00:49:12.096468 kernel: random: crng init done Sep 10 00:49:12.096475 kernel: SMBIOS 2.8 present. Sep 10 00:49:12.096481 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 10 00:49:12.096488 kernel: Hypervisor detected: KVM Sep 10 00:49:12.096494 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 10 00:49:12.096501 kernel: kvm-clock: cpu 0, msr 1f19f001, primary cpu clock Sep 10 00:49:12.096507 kernel: kvm-clock: using sched offset of 7621469321 cycles Sep 10 00:49:12.096519 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 10 00:49:12.096538 kernel: tsc: Detected 2794.748 MHz processor Sep 10 00:49:12.096545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 10 00:49:12.096551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 10 00:49:12.096558 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 10 00:49:12.096565 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 10 00:49:12.096580 kernel: Using GB pages for direct mapping Sep 10 00:49:12.096595 kernel: Secure boot disabled Sep 10 00:49:12.096615 kernel: ACPI: Early table checksum verification disabled Sep 10 00:49:12.096625 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 10 00:49:12.096632 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 10 00:49:12.096638 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096645 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096654 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 10 00:49:12.096661 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096668 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096677 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096684 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:49:12.096701 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 10 00:49:12.096716 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 10 00:49:12.096731 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 10 00:49:12.096738 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 10 00:49:12.096744 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 10 00:49:12.096759 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 10 00:49:12.096766 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 10 00:49:12.096772 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 10 00:49:12.096787 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 10 00:49:12.096801 kernel: No NUMA configuration found Sep 10 00:49:12.096816 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 10 00:49:12.096823 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 10 
00:49:12.096829 kernel: Zone ranges: Sep 10 00:49:12.096836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 10 00:49:12.096843 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 10 00:49:12.096850 kernel: Normal empty Sep 10 00:49:12.096856 kernel: Movable zone start for each node Sep 10 00:49:12.096871 kernel: Early memory node ranges Sep 10 00:49:12.096883 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 10 00:49:12.096890 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 10 00:49:12.096896 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 10 00:49:12.096911 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 10 00:49:12.096918 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 10 00:49:12.096925 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 10 00:49:12.096932 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 10 00:49:12.096946 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 00:49:12.096958 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 10 00:49:12.096975 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 10 00:49:12.096996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 00:49:12.097004 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 10 00:49:12.097013 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 10 00:49:12.097022 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 10 00:49:12.097029 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 10 00:49:12.097036 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 10 00:49:12.097046 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 10 00:49:12.097066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 10 00:49:12.097081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 10 00:49:12.097093 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 10 00:49:12.097100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 10 00:49:12.097107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 10 00:49:12.097116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 10 00:49:12.097125 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 10 00:49:12.097131 kernel: TSC deadline timer available Sep 10 00:49:12.097138 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 10 00:49:12.097144 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 10 00:49:12.097151 kernel: kvm-guest: setup PV sched yield Sep 10 00:49:12.097159 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 10 00:49:12.097166 kernel: Booting paravirtualized kernel on KVM Sep 10 00:49:12.097178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 10 00:49:12.097186 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 10 00:49:12.097193 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 10 00:49:12.097200 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 10 00:49:12.097207 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 10 00:49:12.097213 kernel: kvm-guest: setup async PF for cpu 0 Sep 10 00:49:12.097220 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Sep 10 00:49:12.097227 kernel: kvm-guest: PV spinlocks enabled Sep 10 00:49:12.097234 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 10 00:49:12.097241 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 10 00:49:12.097250 kernel: Policy zone: DMA32 Sep 10 00:49:12.097258 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:49:12.097265 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 00:49:12.097272 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 00:49:12.097280 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 00:49:12.097287 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 00:49:12.097303 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved) Sep 10 00:49:12.097310 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 00:49:12.097317 kernel: ftrace: allocating 34612 entries in 136 pages Sep 10 00:49:12.097324 kernel: ftrace: allocated 136 pages with 2 groups Sep 10 00:49:12.097331 kernel: rcu: Hierarchical RCU implementation. Sep 10 00:49:12.097339 kernel: rcu: RCU event tracing is enabled. Sep 10 00:49:12.097346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 10 00:49:12.097355 kernel: Rude variant of Tasks RCU enabled. Sep 10 00:49:12.097362 kernel: Tracing variant of Tasks RCU enabled. Sep 10 00:49:12.097369 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 10 00:49:12.097376 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 00:49:12.097383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 10 00:49:12.097389 kernel: Console: colour dummy device 80x25 Sep 10 00:49:12.097396 kernel: printk: console [ttyS0] enabled Sep 10 00:49:12.097403 kernel: ACPI: Core revision 20210730 Sep 10 00:49:12.097410 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 10 00:49:12.097419 kernel: APIC: Switch to symmetric I/O mode setup Sep 10 00:49:12.097426 kernel: x2apic enabled Sep 10 00:49:12.097433 kernel: Switched APIC routing to physical x2apic. Sep 10 00:49:12.097439 kernel: kvm-guest: setup PV IPIs Sep 10 00:49:12.097446 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 10 00:49:12.097453 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 10 00:49:12.097460 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 10 00:49:12.097467 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 10 00:49:12.097477 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 10 00:49:12.097486 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 10 00:49:12.097493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 10 00:49:12.097500 kernel: Spectre V2 : Mitigation: Retpolines Sep 10 00:49:12.097507 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 10 00:49:12.097514 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 10 00:49:12.097520 kernel: active return thunk: retbleed_return_thunk Sep 10 00:49:12.097538 kernel: RETBleed: Mitigation: untrained return thunk Sep 10 00:49:12.097548 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 10 00:49:12.097555 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp Sep 10 00:49:12.097564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 10 00:49:12.097571 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 10 00:49:12.097578 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 10 00:49:12.097585 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 10 00:49:12.097592 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 10 00:49:12.097599 kernel: Freeing SMP alternatives memory: 32K Sep 10 00:49:12.097606 kernel: pid_max: default: 32768 minimum: 301 Sep 10 00:49:12.097613 kernel: LSM: Security Framework initializing Sep 10 00:49:12.097619 kernel: SELinux: Initializing. Sep 10 00:49:12.097628 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:49:12.097635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:49:12.097642 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 10 00:49:12.097649 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 10 00:49:12.097656 kernel: ... version: 0 Sep 10 00:49:12.097663 kernel: ... bit width: 48 Sep 10 00:49:12.097669 kernel: ... generic registers: 6 Sep 10 00:49:12.097676 kernel: ... value mask: 0000ffffffffffff Sep 10 00:49:12.097683 kernel: ... max period: 00007fffffffffff Sep 10 00:49:12.097692 kernel: ... fixed-purpose events: 0 Sep 10 00:49:12.097699 kernel: ... event mask: 000000000000003f Sep 10 00:49:12.097706 kernel: signal: max sigframe size: 1776 Sep 10 00:49:12.097712 kernel: rcu: Hierarchical SRCU implementation. Sep 10 00:49:12.097719 kernel: smp: Bringing up secondary CPUs ... Sep 10 00:49:12.097726 kernel: x86: Booting SMP configuration: Sep 10 00:49:12.097733 kernel: .... 
node #0, CPUs: #1 Sep 10 00:49:12.097740 kernel: kvm-clock: cpu 1, msr 1f19f041, secondary cpu clock Sep 10 00:49:12.097747 kernel: kvm-guest: setup async PF for cpu 1 Sep 10 00:49:12.097755 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Sep 10 00:49:12.097762 kernel: #2 Sep 10 00:49:12.097769 kernel: kvm-clock: cpu 2, msr 1f19f081, secondary cpu clock Sep 10 00:49:12.097776 kernel: kvm-guest: setup async PF for cpu 2 Sep 10 00:49:12.097783 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Sep 10 00:49:12.097790 kernel: #3 Sep 10 00:49:12.097797 kernel: kvm-clock: cpu 3, msr 1f19f0c1, secondary cpu clock Sep 10 00:49:12.097808 kernel: kvm-guest: setup async PF for cpu 3 Sep 10 00:49:12.097831 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Sep 10 00:49:12.097851 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 00:49:12.097866 kernel: smpboot: Max logical packages: 1 Sep 10 00:49:12.097873 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 10 00:49:12.097880 kernel: devtmpfs: initialized Sep 10 00:49:12.097887 kernel: x86/mm: Memory block size: 128MB Sep 10 00:49:12.097894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 10 00:49:12.097901 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 10 00:49:12.097909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 10 00:49:12.097916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 10 00:49:12.097923 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 10 00:49:12.097935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 00:49:12.097944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 00:49:12.097952 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 00:49:12.097959 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Sep 10 00:49:12.097966 kernel: audit: initializing netlink subsys (disabled) Sep 10 00:49:12.097973 kernel: audit: type=2000 audit(1757465350.758:1): state=initialized audit_enabled=0 res=1 Sep 10 00:49:12.097980 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 00:49:12.097987 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 10 00:49:12.097996 kernel: cpuidle: using governor menu Sep 10 00:49:12.098003 kernel: ACPI: bus type PCI registered Sep 10 00:49:12.098010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 00:49:12.098017 kernel: dca service started, version 1.12.1 Sep 10 00:49:12.098024 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 10 00:49:12.098031 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 10 00:49:12.098038 kernel: PCI: Using configuration type 1 for base access Sep 10 00:49:12.098045 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 10 00:49:12.098058 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 00:49:12.098067 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 00:49:12.098074 kernel: ACPI: Added _OSI(Module Device) Sep 10 00:49:12.098081 kernel: ACPI: Added _OSI(Processor Device) Sep 10 00:49:12.098088 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 00:49:12.098095 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 10 00:49:12.098102 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 10 00:49:12.098108 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 10 00:49:12.098115 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 00:49:12.098122 kernel: ACPI: Interpreter enabled Sep 10 00:49:12.098129 kernel: ACPI: PM: (supports S0 S3 S5) Sep 10 00:49:12.098138 kernel: ACPI: Using IOAPIC for interrupt routing Sep 10 00:49:12.098145 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 10 00:49:12.098152 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 10 00:49:12.098159 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 00:49:12.098315 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 00:49:12.098395 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 10 00:49:12.098467 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 10 00:49:12.098479 kernel: PCI host bridge to bus 0000:00 Sep 10 00:49:12.098580 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 10 00:49:12.098652 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 10 00:49:12.098718 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 10 00:49:12.098785 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 10 00:49:12.098866 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Sep 10 00:49:12.098934 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 10 00:49:12.099007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 00:49:12.099112 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 10 00:49:12.099224 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 10 00:49:12.099314 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 10 00:49:12.099391 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 10 00:49:12.099465 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 10 00:49:12.099592 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 10 00:49:12.099671 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 10 00:49:12.099765 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 10 00:49:12.099852 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 10 00:49:12.099930 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 10 00:49:12.100005 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 10 00:49:12.100137 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 10 00:49:12.100243 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 10 00:49:12.100460 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 10 00:49:12.100556 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 10 00:49:12.100663 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 10 00:49:12.100742 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 10 00:49:12.100818 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 10 00:49:12.100893 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 10 00:49:12.100991 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 10 00:49:12.101117 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 10 00:49:12.101210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 10 00:49:12.101334 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 10 00:49:12.101414 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 10 00:49:12.101488 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 10 00:49:12.101590 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 10 00:49:12.101672 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 10 00:49:12.101682 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 10 00:49:12.101689 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 10 00:49:12.101696 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 10 00:49:12.101703 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 10 00:49:12.101710 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 10 00:49:12.102544 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 10 00:49:12.102551 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 10 00:49:12.102562 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 10 00:49:12.102569 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 10 00:49:12.102576 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 10 00:49:12.102583 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 10 00:49:12.102590 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 10 00:49:12.102597 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 10 00:49:12.102604 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 10 00:49:12.102610 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 10 00:49:12.105380 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 10 00:49:12.105395 kernel: iommu: Default domain type: Translated Sep 10 
00:49:12.105402 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 10 00:49:12.105493 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 10 00:49:12.105583 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 10 00:49:12.105660 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 10 00:49:12.105670 kernel: vgaarb: loaded Sep 10 00:49:12.105677 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 10 00:49:12.105684 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 10 00:49:12.105694 kernel: PTP clock support registered Sep 10 00:49:12.105701 kernel: Registered efivars operations Sep 10 00:49:12.105709 kernel: PCI: Using ACPI for IRQ routing Sep 10 00:49:12.105716 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 10 00:49:12.105723 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 10 00:49:12.105730 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 10 00:49:12.105737 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Sep 10 00:49:12.105743 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Sep 10 00:49:12.105750 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 10 00:49:12.105758 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 10 00:49:12.105766 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 10 00:49:12.105774 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 10 00:49:12.105781 kernel: clocksource: Switched to clocksource kvm-clock Sep 10 00:49:12.105788 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 00:49:12.105795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 00:49:12.105802 kernel: pnp: PnP ACPI init Sep 10 00:49:12.105903 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 10 00:49:12.105917 kernel: pnp: PnP ACPI: found 6 devices Sep 10 00:49:12.105924 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 10 00:49:12.105931 kernel: NET: Registered PF_INET protocol family Sep 10 00:49:12.105938 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 00:49:12.105945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 00:49:12.105952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 00:49:12.105959 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 00:49:12.105966 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 10 00:49:12.105973 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 00:49:12.105982 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:49:12.105989 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:49:12.105996 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 00:49:12.106003 kernel: NET: Registered PF_XDP protocol family Sep 10 00:49:12.106091 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 10 00:49:12.106168 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 10 00:49:12.106237 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 10 00:49:12.106320 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 10 00:49:12.106392 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 10 00:49:12.106459 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 10 00:49:12.106540 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 10 00:49:12.110303 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 10 00:49:12.110318 kernel: PCI: CLS 0 bytes, default 64 Sep 10 00:49:12.110328 kernel: Initialise system trusted keyrings Sep 10 00:49:12.110337 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 00:49:12.110346 kernel: Key type asymmetric registered Sep 10 00:49:12.110355 kernel: Asymmetric key parser 'x509' registered Sep 10 00:49:12.110368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 10 00:49:12.110377 kernel: io scheduler mq-deadline registered Sep 10 00:49:12.110396 kernel: io scheduler kyber registered Sep 10 00:49:12.110405 kernel: io scheduler bfq registered Sep 10 00:49:12.110412 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 10 00:49:12.110420 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 10 00:49:12.110428 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 10 00:49:12.110435 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 10 00:49:12.110443 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 00:49:12.110451 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 10 00:49:12.110459 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 10 00:49:12.110466 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 10 00:49:12.110473 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 10 00:49:12.110623 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 10 00:49:12.110636 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 10 00:49:12.110703 kernel: rtc_cmos 00:04: registered as rtc0 Sep 10 00:49:12.110770 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:49:11 UTC (1757465351) Sep 10 00:49:12.110841 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 10 00:49:12.110850 kernel: efifb: probing for efifb Sep 10 00:49:12.110858 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 10 00:49:12.110865 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 10 00:49:12.110873 kernel: efifb: scrolling: redraw Sep 10 00:49:12.110880 kernel: efifb: Truecolor: 
size=8:8:8:8, shift=24:16:8:0 Sep 10 00:49:12.110887 kernel: Console: switching to colour frame buffer device 160x50 Sep 10 00:49:12.110895 kernel: fb0: EFI VGA frame buffer device Sep 10 00:49:12.110902 kernel: pstore: Registered efi as persistent store backend Sep 10 00:49:12.110912 kernel: NET: Registered PF_INET6 protocol family Sep 10 00:49:12.110919 kernel: Segment Routing with IPv6 Sep 10 00:49:12.110928 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 00:49:12.110936 kernel: NET: Registered PF_PACKET protocol family Sep 10 00:49:12.110944 kernel: Key type dns_resolver registered Sep 10 00:49:12.110952 kernel: IPI shorthand broadcast: enabled Sep 10 00:49:12.110960 kernel: sched_clock: Marking stable (525001691, 148818830)->(945739709, -271919188) Sep 10 00:49:12.110968 kernel: registered taskstats version 1 Sep 10 00:49:12.110975 kernel: Loading compiled-in X.509 certificates Sep 10 00:49:12.110983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 3af57cd809cc9e43d7af9f276bb20b532a4147af' Sep 10 00:49:12.110990 kernel: Key type .fscrypt registered Sep 10 00:49:12.110997 kernel: Key type fscrypt-provisioning registered Sep 10 00:49:12.111005 kernel: pstore: Using crash dump compression: deflate Sep 10 00:49:12.111012 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 10 00:49:12.111021 kernel: ima: Allocated hash algorithm: sha1 Sep 10 00:49:12.111028 kernel: ima: No architecture policies found Sep 10 00:49:12.111036 kernel: clk: Disabling unused clocks Sep 10 00:49:12.111043 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 10 00:49:12.111050 kernel: Write protecting the kernel read-only data: 28672k Sep 10 00:49:12.111059 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 10 00:49:12.111066 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 10 00:49:12.111074 kernel: Run /init as init process Sep 10 00:49:12.111081 kernel: with arguments: Sep 10 00:49:12.111090 kernel: /init Sep 10 00:49:12.111097 kernel: with environment: Sep 10 00:49:12.111104 kernel: HOME=/ Sep 10 00:49:12.111111 kernel: TERM=linux Sep 10 00:49:12.111119 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 00:49:12.111128 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 10 00:49:12.111138 systemd[1]: Detected virtualization kvm. Sep 10 00:49:12.111146 systemd[1]: Detected architecture x86-64. Sep 10 00:49:12.111155 systemd[1]: Running in initrd. Sep 10 00:49:12.111163 systemd[1]: No hostname configured, using default hostname. Sep 10 00:49:12.111170 systemd[1]: Hostname set to <localhost>. Sep 10 00:49:12.111179 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:49:12.111186 systemd[1]: Queued start job for default target initrd.target. Sep 10 00:49:12.111194 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:49:12.111202 systemd[1]: Reached target cryptsetup.target. Sep 10 00:49:12.111209 systemd[1]: Reached target paths.target. Sep 10 00:49:12.111218 systemd[1]: Reached target slices.target. 
Sep 10 00:49:12.111226 systemd[1]: Reached target swap.target. Sep 10 00:49:12.111233 systemd[1]: Reached target timers.target. Sep 10 00:49:12.111242 systemd[1]: Listening on iscsid.socket. Sep 10 00:49:12.111250 systemd[1]: Listening on iscsiuio.socket. Sep 10 00:49:12.111258 systemd[1]: Listening on systemd-journald-audit.socket. Sep 10 00:49:12.111266 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 10 00:49:12.111273 systemd[1]: Listening on systemd-journald.socket. Sep 10 00:49:12.111282 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:49:12.111301 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:49:12.111309 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:49:12.111317 systemd[1]: Reached target sockets.target. Sep 10 00:49:12.111324 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:49:12.111333 systemd[1]: Finished network-cleanup.service. Sep 10 00:49:12.111341 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 00:49:12.112037 systemd[1]: Starting systemd-journald.service... Sep 10 00:49:12.112047 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:49:12.112057 systemd[1]: Starting systemd-resolved.service... Sep 10 00:49:12.112065 systemd[1]: Starting systemd-vconsole-setup.service... Sep 10 00:49:12.112073 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:49:12.112080 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 00:49:12.112088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 10 00:49:12.112096 systemd[1]: Finished systemd-vconsole-setup.service. Sep 10 00:49:12.112104 kernel: audit: type=1130 audit(1757465352.102:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.112112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 10 00:49:12.112121 kernel: audit: type=1130 audit(1757465352.106:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.112130 systemd[1]: Starting dracut-cmdline-ask.service... Sep 10 00:49:12.112141 systemd-journald[197]: Journal started Sep 10 00:49:12.112189 systemd-journald[197]: Runtime Journal (/run/log/journal/501d3b3f6dcb497eaf6432908febd9d9) is 6.0M, max 48.4M, 42.4M free. Sep 10 00:49:12.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.095435 systemd-modules-load[198]: Inserted module 'overlay' Sep 10 00:49:12.115637 systemd[1]: Started systemd-journald.service. Sep 10 00:49:12.115668 kernel: audit: type=1130 audit(1757465352.114:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.160560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 00:49:12.164996 systemd-modules-load[198]: Inserted module 'br_netfilter' Sep 10 00:49:12.165948 kernel: Bridge firewalling registered Sep 10 00:49:12.167412 systemd[1]: Finished dracut-cmdline-ask.service. 
Sep 10 00:49:12.193111 kernel: audit: type=1130 audit(1757465352.183:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.193128 kernel: SCSI subsystem initialized Sep 10 00:49:12.193138 kernel: audit: type=1130 audit(1757465352.188:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.169454 systemd-resolved[199]: Positive Trust Anchors: Sep 10 00:49:12.169472 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:49:12.169499 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:49:12.171949 systemd-resolved[199]: Defaulting to hostname 'linux'. Sep 10 00:49:12.205240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 10 00:49:12.205259 kernel: device-mapper: uevent: version 1.0.3 Sep 10 00:49:12.205268 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 10 00:49:12.205278 dracut-cmdline[217]: dracut-dracut-053 Sep 10 00:49:12.205278 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:49:12.184142 systemd[1]: Started systemd-resolved.service. Sep 10 00:49:12.188213 systemd[1]: Reached target nss-lookup.target. Sep 10 00:49:12.191808 systemd[1]: Starting dracut-cmdline.service... Sep 10 00:49:12.211422 systemd-modules-load[198]: Inserted module 'dm_multipath' Sep 10 00:49:12.217667 kernel: audit: type=1130 audit(1757465352.213:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.212371 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:49:12.214965 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:49:12.225165 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:49:12.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:12.229568 kernel: audit: type=1130 audit(1757465352.225:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.259561 kernel: Loading iSCSI transport class v2.0-870. Sep 10 00:49:12.291580 kernel: iscsi: registered transport (tcp) Sep 10 00:49:12.315579 kernel: iscsi: registered transport (qla4xxx) Sep 10 00:49:12.315632 kernel: QLogic iSCSI HBA Driver Sep 10 00:49:12.366476 systemd[1]: Finished dracut-cmdline.service. Sep 10 00:49:12.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.367630 systemd[1]: Starting dracut-pre-udev.service... Sep 10 00:49:12.372394 kernel: audit: type=1130 audit(1757465352.366:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.421613 kernel: raid6: avx2x4 gen() 23560 MB/s Sep 10 00:49:12.438591 kernel: raid6: avx2x4 xor() 6890 MB/s Sep 10 00:49:12.455589 kernel: raid6: avx2x2 gen() 31111 MB/s Sep 10 00:49:12.472577 kernel: raid6: avx2x2 xor() 16301 MB/s Sep 10 00:49:12.489570 kernel: raid6: avx2x1 gen() 23062 MB/s Sep 10 00:49:12.506586 kernel: raid6: avx2x1 xor() 14337 MB/s Sep 10 00:49:12.523587 kernel: raid6: sse2x4 gen() 11724 MB/s Sep 10 00:49:12.540573 kernel: raid6: sse2x4 xor() 6709 MB/s Sep 10 00:49:12.557572 kernel: raid6: sse2x2 gen() 15039 MB/s Sep 10 00:49:12.574578 kernel: raid6: sse2x2 xor() 9698 MB/s Sep 10 00:49:12.591592 kernel: raid6: sse2x1 gen() 11526 MB/s Sep 10 00:49:12.608943 kernel: raid6: sse2x1 xor() 7657 MB/s Sep 10 00:49:12.608996 kernel: raid6: using algorithm avx2x2 gen() 31111 MB/s Sep 10 00:49:12.609006 kernel: raid6: .... 
xor() 16301 MB/s, rmw enabled Sep 10 00:49:12.623517 kernel: raid6: using avx2x2 recovery algorithm Sep 10 00:49:12.636586 kernel: xor: automatically using best checksumming function avx Sep 10 00:49:12.798562 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 10 00:49:12.808356 systemd[1]: Finished dracut-pre-udev.service. Sep 10 00:49:12.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.809000 audit: BPF prog-id=7 op=LOAD Sep 10 00:49:12.812573 kernel: audit: type=1130 audit(1757465352.807:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.811000 audit: BPF prog-id=8 op=LOAD Sep 10 00:49:12.813036 systemd[1]: Starting systemd-udevd.service... Sep 10 00:49:12.828230 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 10 00:49:12.832496 systemd[1]: Started systemd-udevd.service. Sep 10 00:49:12.834923 systemd[1]: Starting dracut-pre-trigger.service... Sep 10 00:49:12.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.848931 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Sep 10 00:49:12.878632 systemd[1]: Finished dracut-pre-trigger.service. Sep 10 00:49:12.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.880792 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:49:12.938454 systemd[1]: Finished systemd-udev-trigger.service. 
Sep 10 00:49:12.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:12.969558 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 00:49:12.976579 kernel: libata version 3.00 loaded. Sep 10 00:49:12.980470 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 00:49:12.999178 kernel: AVX2 version of gcm_enc/dec engaged. Sep 10 00:49:12.999201 kernel: AES CTR mode by8 optimization enabled Sep 10 00:49:12.999214 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 00:49:13.033295 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 00:49:13.033315 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 00:49:13.033325 kernel: GPT:9289727 != 19775487 Sep 10 00:49:13.033342 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 00:49:13.033351 kernel: GPT:9289727 != 19775487 Sep 10 00:49:13.033360 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 10 00:49:13.033369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:49:13.033378 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 10 00:49:13.033482 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 00:49:13.033607 kernel: scsi host0: ahci Sep 10 00:49:13.033754 kernel: scsi host1: ahci Sep 10 00:49:13.033941 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (449) Sep 10 00:49:13.033956 kernel: scsi host2: ahci Sep 10 00:49:13.034087 kernel: scsi host3: ahci Sep 10 00:49:13.034212 kernel: scsi host4: ahci Sep 10 00:49:13.034366 kernel: scsi host5: ahci Sep 10 00:49:13.034501 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 10 00:49:13.034516 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 10 00:49:13.034550 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 10 00:49:13.034564 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 10 00:49:13.034575 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 10 00:49:13.034591 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 10 00:49:13.027958 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 10 00:49:13.033686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 10 00:49:13.034922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 10 00:49:13.043759 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 10 00:49:13.049128 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:49:13.051726 systemd[1]: Starting disk-uuid.service... 
Sep 10 00:49:13.343561 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 00:49:13.343640 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 00:49:13.343651 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 00:49:13.345568 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 00:49:13.345666 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 00:49:13.346561 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 00:49:13.347879 kernel: ata3.00: applying bridge limits Sep 10 00:49:13.348550 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 00:49:13.349547 kernel: ata3.00: configured for UDMA/100 Sep 10 00:49:13.351552 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 00:49:13.354158 disk-uuid[536]: Primary Header is updated. Sep 10 00:49:13.354158 disk-uuid[536]: Secondary Entries is updated. Sep 10 00:49:13.354158 disk-uuid[536]: Secondary Header is updated. Sep 10 00:49:13.359606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:49:13.364560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:49:13.413501 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 00:49:13.431202 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 00:49:13.431218 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 00:49:14.368567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:49:14.368877 disk-uuid[537]: The operation has completed successfully. Sep 10 00:49:14.392872 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 00:49:14.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:14.392951 systemd[1]: Finished disk-uuid.service. Sep 10 00:49:14.402733 systemd[1]: Starting verity-setup.service... Sep 10 00:49:14.416586 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 10 00:49:14.436892 systemd[1]: Found device dev-mapper-usr.device. Sep 10 00:49:14.439769 systemd[1]: Mounting sysusr-usr.mount... Sep 10 00:49:14.443463 systemd[1]: Finished verity-setup.service. Sep 10 00:49:14.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.502559 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 10 00:49:14.502592 systemd[1]: Mounted sysusr-usr.mount. Sep 10 00:49:14.504103 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 10 00:49:14.506103 systemd[1]: Starting ignition-setup.service... Sep 10 00:49:14.508620 systemd[1]: Starting parse-ip-for-networkd.service... Sep 10 00:49:14.514896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:49:14.514927 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:49:14.514943 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:49:14.524743 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 10 00:49:14.533869 systemd[1]: Finished ignition-setup.service. Sep 10 00:49:14.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.534902 systemd[1]: Starting ignition-fetch-offline.service... Sep 10 00:49:14.601483 systemd[1]: Finished parse-ip-for-networkd.service. 
Sep 10 00:49:14.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.602000 audit: BPF prog-id=9 op=LOAD Sep 10 00:49:14.604142 systemd[1]: Starting systemd-networkd.service... Sep 10 00:49:14.606019 ignition[632]: Ignition 2.14.0 Sep 10 00:49:14.606033 ignition[632]: Stage: fetch-offline Sep 10 00:49:14.606147 ignition[632]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:49:14.606158 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:49:14.606808 ignition[632]: parsed url from cmdline: "" Sep 10 00:49:14.606812 ignition[632]: no config URL provided Sep 10 00:49:14.606817 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 00:49:14.606825 ignition[632]: no config at "/usr/lib/ignition/user.ign" Sep 10 00:49:14.609462 ignition[632]: op(1): [started] loading QEMU firmware config module Sep 10 00:49:14.609469 ignition[632]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 00:49:14.618438 ignition[632]: op(1): [finished] loading QEMU firmware config module Sep 10 00:49:14.630947 systemd-networkd[714]: lo: Link UP Sep 10 00:49:14.630958 systemd-networkd[714]: lo: Gained carrier Sep 10 00:49:14.631557 systemd-networkd[714]: Enumeration completed Sep 10 00:49:14.631656 systemd[1]: Started systemd-networkd.service. Sep 10 00:49:14.632097 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:49:14.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.633511 systemd-networkd[714]: eth0: Link UP Sep 10 00:49:14.633519 systemd-networkd[714]: eth0: Gained carrier Sep 10 00:49:14.635560 systemd[1]: Reached target network.target. 
Sep 10 00:49:14.639728 systemd[1]: Starting iscsiuio.service... Sep 10 00:49:14.670830 ignition[632]: parsing config with SHA512: 8ad335e158703dccbcf55993a0255b1d3ffae5cdae31a83825eb9c4ecca6418fa285f03f809200e5f2f4c597a7cf479d24fb0c8c9f67dd9e127019abfe421166 Sep 10 00:49:14.735654 unknown[632]: fetched base config from "system" Sep 10 00:49:14.735669 unknown[632]: fetched user config from "qemu" Sep 10 00:49:14.737930 ignition[632]: fetch-offline: fetch-offline passed Sep 10 00:49:14.738034 ignition[632]: Ignition finished successfully Sep 10 00:49:14.739610 systemd[1]: Started iscsiuio.service. Sep 10 00:49:14.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.741961 systemd[1]: Starting iscsid.service... Sep 10 00:49:14.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.742984 systemd[1]: Finished ignition-fetch-offline.service. Sep 10 00:49:14.744132 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 00:49:14.745165 systemd[1]: Starting ignition-kargs.service... Sep 10 00:49:14.749686 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:49:14.753477 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:49:14.753477 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 10 00:49:14.753477 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 10 00:49:14.753477 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 10 00:49:14.763409 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:49:14.763409 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 10 00:49:14.767923 systemd[1]: Started iscsid.service. Sep 10 00:49:14.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.771866 systemd[1]: Starting dracut-initqueue.service... Sep 10 00:49:14.787719 systemd[1]: Finished dracut-initqueue.service. Sep 10 00:49:14.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.787976 systemd[1]: Reached target remote-fs-pre.target. Sep 10 00:49:14.789765 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:49:14.790206 systemd[1]: Reached target remote-fs.target. Sep 10 00:49:14.794280 systemd[1]: Starting dracut-pre-mount.service... Sep 10 00:49:14.803310 systemd[1]: Finished dracut-pre-mount.service. Sep 10 00:49:14.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:14.805771 ignition[722]: Ignition 2.14.0 Sep 10 00:49:14.805783 ignition[722]: Stage: kargs Sep 10 00:49:14.805936 ignition[722]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:49:14.805946 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:49:14.807841 ignition[722]: kargs: kargs passed Sep 10 00:49:14.810226 systemd[1]: Finished ignition-kargs.service. Sep 10 00:49:14.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.807889 ignition[722]: Ignition finished successfully Sep 10 00:49:14.812940 systemd[1]: Starting ignition-disks.service... Sep 10 00:49:14.829431 ignition[741]: Ignition 2.14.0 Sep 10 00:49:14.829443 ignition[741]: Stage: disks Sep 10 00:49:14.829608 ignition[741]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:49:14.829624 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:49:14.834102 ignition[741]: disks: disks passed Sep 10 00:49:14.834156 ignition[741]: Ignition finished successfully Sep 10 00:49:14.836372 systemd[1]: Finished ignition-disks.service. Sep 10 00:49:14.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.837626 systemd[1]: Reached target initrd-root-device.target. Sep 10 00:49:14.839155 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:49:14.840094 systemd[1]: Reached target local-fs.target. Sep 10 00:49:14.841007 systemd[1]: Reached target sysinit.target. Sep 10 00:49:14.842983 systemd[1]: Reached target basic.target. Sep 10 00:49:14.846783 systemd[1]: Starting systemd-fsck-root.service... 
Sep 10 00:49:14.863268 systemd-fsck[749]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 10 00:49:14.869480 systemd[1]: Finished systemd-fsck-root.service. Sep 10 00:49:14.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.870447 systemd[1]: Mounting sysroot.mount... Sep 10 00:49:14.882599 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 10 00:49:14.883123 systemd[1]: Mounted sysroot.mount. Sep 10 00:49:14.884132 systemd[1]: Reached target initrd-root-fs.target. Sep 10 00:49:14.887011 systemd[1]: Mounting sysroot-usr.mount... Sep 10 00:49:14.888735 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 10 00:49:14.888787 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 00:49:14.888809 systemd[1]: Reached target ignition-diskful.target. Sep 10 00:49:14.891623 systemd[1]: Mounted sysroot-usr.mount. Sep 10 00:49:14.894739 systemd[1]: Starting initrd-setup-root.service... Sep 10 00:49:14.902030 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:49:14.907133 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:49:14.911969 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:49:14.915928 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:49:14.944408 systemd[1]: Finished initrd-setup-root.service. Sep 10 00:49:14.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:14.946257 systemd[1]: Starting ignition-mount.service... Sep 10 00:49:14.948000 systemd[1]: Starting sysroot-boot.service... Sep 10 00:49:14.951862 bash[800]: umount: /sysroot/usr/share/oem: not mounted. Sep 10 00:49:14.964244 ignition[801]: INFO : Ignition 2.14.0 Sep 10 00:49:14.964244 ignition[801]: INFO : Stage: mount Sep 10 00:49:14.975829 ignition[801]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:49:14.975829 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:49:14.975829 ignition[801]: INFO : mount: mount passed Sep 10 00:49:14.975829 ignition[801]: INFO : Ignition finished successfully Sep 10 00:49:14.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:14.965861 systemd[1]: Finished ignition-mount.service. Sep 10 00:49:14.985653 systemd[1]: Finished sysroot-boot.service. Sep 10 00:49:14.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:15.448308 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 10 00:49:15.455548 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Sep 10 00:49:15.455575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:49:15.455586 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:49:15.457026 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:49:15.460179 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 10 00:49:15.462000 systemd[1]: Starting ignition-files.service... 
Sep 10 00:49:15.589087 ignition[831]: INFO : Ignition 2.14.0
Sep 10 00:49:15.589087 ignition[831]: INFO : Stage: files
Sep 10 00:49:15.591099 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:49:15.591099 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:49:15.591099 ignition[831]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:49:15.594858 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:49:15.594858 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:49:15.598164 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:49:15.598164 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:49:15.600843 unknown[831]: wrote ssh authorized keys file for user: core
Sep 10 00:49:15.601839 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:49:15.603232 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:49:15.603232 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 10 00:49:15.658843 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:49:16.192372 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:49:16.194458 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:49:16.194458 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 10 00:49:16.428972 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:49:16.575782 systemd-networkd[714]: eth0: Gained IPv6LL
Sep 10 00:49:16.739457 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:49:16.739457 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:49:16.743640 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:49:16.984973 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:49:17.876932 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:49:17.876932 ignition[831]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:49:17.881616 ignition[831]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:49:18.039950 ignition[831]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:49:18.041656 ignition[831]: INFO : files: files passed
Sep 10 00:49:18.041656 ignition[831]: INFO : Ignition finished successfully
Sep 10 00:49:18.051582 systemd[1]: Finished ignition-files.service.
Sep 10 00:49:18.056952 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 10 00:49:18.056976 kernel: audit: type=1130 audit(1757465358.052:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.057046 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 10 00:49:18.058892 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 10 00:49:18.059654 systemd[1]: Starting ignition-quench.service...
Sep 10 00:49:18.062253 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:49:18.069461 kernel: audit: type=1130 audit(1757465358.063:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.069481 kernel: audit: type=1131 audit(1757465358.063:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.062323 systemd[1]: Finished ignition-quench.service.
Sep 10 00:49:18.073985 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 10 00:49:18.085135 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:49:18.087129 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 10 00:49:18.092041 kernel: audit: type=1130 audit(1757465358.088:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.088258 systemd[1]: Reached target ignition-complete.target.
Sep 10 00:49:18.093725 systemd[1]: Starting initrd-parse-etc.service...
Sep 10 00:49:18.105798 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:49:18.105878 systemd[1]: Finished initrd-parse-etc.service.
Sep 10 00:49:18.114934 kernel: audit: type=1130 audit(1757465358.108:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.114955 kernel: audit: type=1131 audit(1757465358.108:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.108735 systemd[1]: Reached target initrd-fs.target.
Sep 10 00:49:18.115687 systemd[1]: Reached target initrd.target.
Sep 10 00:49:18.117039 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 10 00:49:18.117730 systemd[1]: Starting dracut-pre-pivot.service...
Sep 10 00:49:18.130670 systemd[1]: Finished dracut-pre-pivot.service.
Sep 10 00:49:18.135416 kernel: audit: type=1130 audit(1757465358.130:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.135467 systemd[1]: Starting initrd-cleanup.service...
Sep 10 00:49:18.145025 systemd[1]: Stopped target nss-lookup.target.
Sep 10 00:49:18.145957 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 10 00:49:18.147518 systemd[1]: Stopped target timers.target.
Sep 10 00:49:18.149019 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:49:18.154028 kernel: audit: type=1131 audit(1757465358.149:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.149133 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 10 00:49:18.150613 systemd[1]: Stopped target initrd.target.
Sep 10 00:49:18.154845 systemd[1]: Stopped target basic.target.
Sep 10 00:49:18.156264 systemd[1]: Stopped target ignition-complete.target.
Sep 10 00:49:18.181140 systemd[1]: Stopped target ignition-diskful.target.
Sep 10 00:49:18.184260 systemd[1]: Stopped target initrd-root-device.target.
Sep 10 00:49:18.186023 systemd[1]: Stopped target remote-fs.target.
Sep 10 00:49:18.187597 systemd[1]: Stopped target remote-fs-pre.target.
Sep 10 00:49:18.189245 systemd[1]: Stopped target sysinit.target.
Sep 10 00:49:18.190756 systemd[1]: Stopped target local-fs.target.
Sep 10 00:49:18.192319 systemd[1]: Stopped target local-fs-pre.target.
Sep 10 00:49:18.193979 systemd[1]: Stopped target swap.target.
Sep 10 00:49:18.195523 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:49:18.196549 systemd[1]: Stopped dracut-pre-mount.service.
Sep 10 00:49:18.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.198722 systemd[1]: Stopped target cryptsetup.target.
Sep 10 00:49:18.203012 kernel: audit: type=1131 audit(1757465358.198:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.203025 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:49:18.203988 systemd[1]: Stopped dracut-initqueue.service.
Sep 10 00:49:18.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.205622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:49:18.210249 kernel: audit: type=1131 audit(1757465358.205:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.205714 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 10 00:49:18.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.210472 systemd[1]: Stopped target paths.target.
Sep 10 00:49:18.211908 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:49:18.215576 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 10 00:49:18.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.215725 systemd[1]: Stopped target slices.target.
Sep 10 00:49:18.215883 systemd[1]: Stopped target sockets.target.
Sep 10 00:49:18.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.216055 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:49:18.216133 systemd[1]: Closed iscsid.socket.
Sep 10 00:49:18.231711 ignition[871]: INFO : Ignition 2.14.0
Sep 10 00:49:18.231711 ignition[871]: INFO : Stage: umount
Sep 10 00:49:18.231711 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:49:18.231711 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:49:18.216408 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:49:18.237033 ignition[871]: INFO : umount: umount passed
Sep 10 00:49:18.237033 ignition[871]: INFO : Ignition finished successfully
Sep 10 00:49:18.216505 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 10 00:49:18.216884 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:49:18.216964 systemd[1]: Stopped ignition-files.service.
Sep 10 00:49:18.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.218053 systemd[1]: Stopping ignition-mount.service...
Sep 10 00:49:18.218553 systemd[1]: Stopping iscsiuio.service...
Sep 10 00:49:18.218689 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:49:18.218804 systemd[1]: Stopped kmod-static-nodes.service.
Sep 10 00:49:18.219577 systemd[1]: Stopping sysroot-boot.service...
Sep 10 00:49:18.219877 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:49:18.220008 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 10 00:49:18.220425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:49:18.220556 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 10 00:49:18.224162 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 10 00:49:18.224276 systemd[1]: Stopped iscsiuio.service.
Sep 10 00:49:18.226150 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:49:18.226256 systemd[1]: Closed iscsiuio.socket.
Sep 10 00:49:18.227845 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:49:18.227935 systemd[1]: Finished initrd-cleanup.service.
Sep 10 00:49:18.238719 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:49:18.239757 systemd[1]: Stopped ignition-mount.service.
Sep 10 00:49:18.257394 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:49:18.258679 systemd[1]: Stopped target network.target.
Sep 10 00:49:18.260315 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:49:18.260374 systemd[1]: Stopped ignition-disks.service.
Sep 10 00:49:18.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.262949 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:49:18.262987 systemd[1]: Stopped ignition-kargs.service.
Sep 10 00:49:18.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.265682 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:49:18.266753 systemd[1]: Stopped ignition-setup.service.
Sep 10 00:49:18.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.268763 systemd[1]: Stopping systemd-networkd.service...
Sep 10 00:49:18.270884 systemd[1]: Stopping systemd-resolved.service...
Sep 10 00:49:18.274606 systemd-networkd[714]: eth0: DHCPv6 lease lost
Sep 10 00:49:18.276236 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:49:18.276369 systemd[1]: Stopped systemd-networkd.service.
Sep 10 00:49:18.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.278377 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:49:18.278415 systemd[1]: Closed systemd-networkd.socket.
Sep 10 00:49:18.281733 systemd[1]: Stopping network-cleanup.service...
Sep 10 00:49:18.283231 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:49:18.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.284000 audit: BPF prog-id=9 op=UNLOAD
Sep 10 00:49:18.283297 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 10 00:49:18.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.285015 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:49:18.285055 systemd[1]: Stopped systemd-sysctl.service.
Sep 10 00:49:18.290708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:49:18.290785 systemd[1]: Stopped systemd-modules-load.service.
Sep 10 00:49:18.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.293839 systemd[1]: Stopping systemd-udevd.service...
Sep 10 00:49:18.296003 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 00:49:18.296614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:49:18.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.296713 systemd[1]: Stopped systemd-resolved.service.
Sep 10 00:49:18.301188 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:49:18.302000 audit: BPF prog-id=6 op=UNLOAD
Sep 10 00:49:18.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.301280 systemd[1]: Stopped network-cleanup.service.
Sep 10 00:49:18.305096 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:49:18.305219 systemd[1]: Stopped systemd-udevd.service.
Sep 10 00:49:18.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.308136 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:49:18.308218 systemd[1]: Closed systemd-udevd-control.socket.
Sep 10 00:49:18.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.309445 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:49:18.309484 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 10 00:49:18.311508 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:49:18.311609 systemd[1]: Stopped dracut-pre-udev.service.
Sep 10 00:49:18.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:18.311749 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:49:18.311794 systemd[1]: Stopped dracut-cmdline.service.
Sep 10 00:49:18.312046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:49:18.312088 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 10 00:49:18.313302 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 10 00:49:18.313514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:49:18.313593 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 10 00:49:18.314915 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:49:18.314995 systemd[1]: Stopped sysroot-boot.service.
Sep 10 00:49:18.315324 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:49:18.315359 systemd[1]: Stopped initrd-setup-root.service.
Sep 10 00:49:18.319582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:49:18.319654 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 10 00:49:18.321500 systemd[1]: Reached target initrd-switch-root.target.
Sep 10 00:49:18.323599 systemd[1]: Starting initrd-switch-root.service...
Sep 10 00:49:18.339783 systemd[1]: Switching root.
Sep 10 00:49:18.360508 iscsid[721]: iscsid shutting down.
Sep 10 00:49:18.361264 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:49:18.361303 systemd-journald[197]: Journal stopped
Sep 10 00:49:23.562603 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 10 00:49:23.562643 kernel: SELinux: Class anon_inode not defined in policy.
Sep 10 00:49:23.562657 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 10 00:49:23.562669 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:49:23.562679 kernel: SELinux: policy capability open_perms=1
Sep 10 00:49:23.562689 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:49:23.562699 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:49:23.562708 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:49:23.562718 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:49:23.562727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:49:23.562740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:49:23.562751 systemd[1]: Successfully loaded SELinux policy in 42.782ms.
Sep 10 00:49:23.562770 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.163ms.
Sep 10 00:49:23.562782 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 10 00:49:23.562793 systemd[1]: Detected virtualization kvm.
Sep 10 00:49:23.562804 systemd[1]: Detected architecture x86-64.
Sep 10 00:49:23.562814 systemd[1]: Detected first boot.
Sep 10 00:49:23.562836 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:49:23.562846 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 10 00:49:23.562858 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:49:23.562869 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:49:23.562883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:49:23.562895 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:49:23.562908 kernel: kauditd_printk_skb: 46 callbacks suppressed
Sep 10 00:49:23.562919 kernel: audit: type=1334 audit(1757465363.401:83): prog-id=12 op=LOAD
Sep 10 00:49:23.562928 kernel: audit: type=1334 audit(1757465363.401:84): prog-id=3 op=UNLOAD
Sep 10 00:49:23.562938 kernel: audit: type=1334 audit(1757465363.403:85): prog-id=13 op=LOAD
Sep 10 00:49:23.562948 kernel: audit: type=1334 audit(1757465363.404:86): prog-id=14 op=LOAD
Sep 10 00:49:23.562965 kernel: audit: type=1334 audit(1757465363.404:87): prog-id=4 op=UNLOAD
Sep 10 00:49:23.562975 systemd[1]: iscsid.service: Deactivated successfully.
Sep 10 00:49:23.562985 kernel: audit: type=1334 audit(1757465363.404:88): prog-id=5 op=UNLOAD
Sep 10 00:49:23.562995 systemd[1]: Stopped iscsid.service.
Sep 10 00:49:23.563006 kernel: audit: type=1131 audit(1757465363.406:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:23.563018 kernel: audit: type=1131 audit(1757465363.416:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:49:23.563029 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:49:23.563039 systemd[1]: Stopped initrd-switch-root.service.
Sep 10 00:49:23.563050 kernel: audit: type=1130 audit(1757465363.423:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.563060 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 00:49:23.563072 kernel: audit: type=1131 audit(1757465363.423:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.563083 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 10 00:49:23.563095 systemd[1]: Created slice system-addon\x2drun.slice. Sep 10 00:49:23.563106 systemd[1]: Created slice system-getty.slice. Sep 10 00:49:23.563116 systemd[1]: Created slice system-modprobe.slice. Sep 10 00:49:23.563128 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 10 00:49:23.563141 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 10 00:49:23.563152 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 10 00:49:23.563162 systemd[1]: Created slice user.slice. Sep 10 00:49:23.563174 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:49:23.563185 systemd[1]: Started systemd-ask-password-wall.path. Sep 10 00:49:23.563900 systemd[1]: Set up automount boot.automount. Sep 10 00:49:23.563913 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 10 00:49:23.563924 systemd[1]: Stopped target initrd-switch-root.target. Sep 10 00:49:23.563934 systemd[1]: Stopped target initrd-fs.target. Sep 10 00:49:23.563944 systemd[1]: Stopped target initrd-root-fs.target. Sep 10 00:49:23.563968 systemd[1]: Reached target integritysetup.target. Sep 10 00:49:23.563981 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:49:23.563991 systemd[1]: Reached target remote-fs.target. 
Sep 10 00:49:23.564005 systemd[1]: Reached target slices.target. Sep 10 00:49:23.564026 systemd[1]: Reached target swap.target. Sep 10 00:49:23.564049 systemd[1]: Reached target torcx.target. Sep 10 00:49:23.564063 systemd[1]: Reached target veritysetup.target. Sep 10 00:49:23.564077 systemd[1]: Listening on systemd-coredump.socket. Sep 10 00:49:23.564091 systemd[1]: Listening on systemd-initctl.socket. Sep 10 00:49:23.564105 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:49:23.564119 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:49:23.564131 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:49:23.564145 systemd[1]: Listening on systemd-userdbd.socket. Sep 10 00:49:23.564155 systemd[1]: Mounting dev-hugepages.mount... Sep 10 00:49:23.564166 systemd[1]: Mounting dev-mqueue.mount... Sep 10 00:49:23.564176 systemd[1]: Mounting media.mount... Sep 10 00:49:23.564186 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:23.564196 systemd[1]: Mounting sys-kernel-debug.mount... Sep 10 00:49:23.564206 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 10 00:49:23.564217 systemd[1]: Mounting tmp.mount... Sep 10 00:49:23.564227 systemd[1]: Starting flatcar-tmpfiles.service... Sep 10 00:49:23.564239 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:49:23.564249 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:49:23.564259 systemd[1]: Starting modprobe@configfs.service... Sep 10 00:49:23.564269 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:49:23.564279 systemd[1]: Starting modprobe@drm.service... Sep 10 00:49:23.564293 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:49:23.564304 systemd[1]: Starting modprobe@fuse.service... Sep 10 00:49:23.564314 systemd[1]: Starting modprobe@loop.service... 
Sep 10 00:49:23.564325 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:49:23.564337 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 00:49:23.564347 systemd[1]: Stopped systemd-fsck-root.service. Sep 10 00:49:23.564358 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 00:49:23.564368 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 00:49:23.564379 systemd[1]: Stopped systemd-journald.service. Sep 10 00:49:23.564389 kernel: loop: module loaded Sep 10 00:49:23.564401 kernel: fuse: init (API version 7.34) Sep 10 00:49:23.564412 systemd[1]: Starting systemd-journald.service... Sep 10 00:49:23.564422 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:49:23.564433 systemd[1]: Starting systemd-network-generator.service... Sep 10 00:49:23.564444 systemd[1]: Starting systemd-remount-fs.service... Sep 10 00:49:23.564454 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:49:23.564465 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 00:49:23.564475 systemd[1]: Stopped verity-setup.service. Sep 10 00:49:23.564486 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:23.564496 systemd[1]: Mounted dev-hugepages.mount. Sep 10 00:49:23.564507 systemd[1]: Mounted dev-mqueue.mount. Sep 10 00:49:23.564521 systemd-journald[986]: Journal started Sep 10 00:49:23.564582 systemd-journald[986]: Runtime Journal (/run/log/journal/501d3b3f6dcb497eaf6432908febd9d9) is 6.0M, max 48.4M, 42.4M free. 
Sep 10 00:49:18.425000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:49:19.613000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 10 00:49:19.613000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 10 00:49:19.613000 audit: BPF prog-id=10 op=LOAD Sep 10 00:49:19.613000 audit: BPF prog-id=10 op=UNLOAD Sep 10 00:49:19.614000 audit: BPF prog-id=11 op=LOAD Sep 10 00:49:19.614000 audit: BPF prog-id=11 op=UNLOAD Sep 10 00:49:19.645000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 10 00:49:19.645000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:49:19.645000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 10 00:49:19.670000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 10 00:49:19.670000 audit[905]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859a9 a2=1ed a3=0 items=2 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:49:19.670000 audit: CWD cwd="/" Sep 10 00:49:19.670000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:19.670000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:19.670000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 10 00:49:23.401000 audit: BPF prog-id=12 op=LOAD Sep 10 00:49:23.401000 audit: BPF prog-id=3 op=UNLOAD Sep 10 00:49:23.403000 audit: BPF prog-id=13 op=LOAD Sep 10 00:49:23.404000 audit: BPF prog-id=14 op=LOAD Sep 10 00:49:23.404000 audit: BPF prog-id=4 op=UNLOAD Sep 10 00:49:23.404000 audit: BPF prog-id=5 op=UNLOAD Sep 10 00:49:23.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:23.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.431000 audit: BPF prog-id=12 op=UNLOAD Sep 10 00:49:23.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.538000 audit: BPF prog-id=15 op=LOAD Sep 10 00:49:23.538000 audit: BPF prog-id=16 op=LOAD Sep 10 00:49:23.538000 audit: BPF prog-id=17 op=LOAD Sep 10 00:49:23.538000 audit: BPF prog-id=13 op=UNLOAD Sep 10 00:49:23.538000 audit: BPF prog-id=14 op=UNLOAD Sep 10 00:49:23.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:23.560000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 10 00:49:23.566185 systemd[1]: Started systemd-journald.service. Sep 10 00:49:23.560000 audit[986]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc2e040a50 a2=4000 a3=7ffc2e040aec items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:49:23.560000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:49:23.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.399377 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:49:19.644612 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:49:23.399390 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 10 00:49:19.644908 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 10 00:49:23.406348 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 10 00:49:19.644925 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 10 00:49:23.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.567111 systemd[1]: Mounted media.mount. Sep 10 00:49:19.644956 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 10 00:49:23.567829 systemd[1]: Mounted sys-kernel-debug.mount. Sep 10 00:49:19.644966 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 10 00:49:23.568665 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 10 00:49:23.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:19.645006 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 10 00:49:23.569508 systemd[1]: Mounted tmp.mount. Sep 10 00:49:19.645017 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 10 00:49:23.570416 systemd[1]: Finished flatcar-tmpfiles.service. 
Sep 10 00:49:19.645223 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 10 00:49:23.571514 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:49:23.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:19.645261 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 10 00:49:23.572610 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:49:19.645274 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 10 00:49:23.572763 systemd[1]: Finished modprobe@configfs.service. Sep 10 00:49:19.645690 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 10 00:49:23.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:49:23.573853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:49:19.645734 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 10 00:49:23.573984 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:49:19.645756 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 10 00:49:23.575061 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:49:19.645770 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 10 00:49:23.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:19.645789 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 10 00:49:23.575209 systemd[1]: Finished modprobe@drm.service. 
Sep 10 00:49:19.645805 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 10 00:49:23.576377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:49:23.073081 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:49:23.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.576535 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 10 00:49:23.073336 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:49:23.073442 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:49:23.073680 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:49:23.073770 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 10 00:49:23.073868 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-10T00:49:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 10 00:49:23.577788 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:49:23.577931 systemd[1]: Finished modprobe@fuse.service. 
Sep 10 00:49:23.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.578975 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:49:23.579101 systemd[1]: Finished modprobe@loop.service. Sep 10 00:49:23.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.580131 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:49:23.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.581222 systemd[1]: Finished systemd-network-generator.service. Sep 10 00:49:23.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.582497 systemd[1]: Finished systemd-remount-fs.service. 
Sep 10 00:49:23.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.583780 systemd[1]: Reached target network-pre.target. Sep 10 00:49:23.585986 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 10 00:49:23.588190 systemd[1]: Mounting sys-kernel-config.mount... Sep 10 00:49:23.589018 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:49:23.591796 systemd[1]: Starting systemd-hwdb-update.service... Sep 10 00:49:23.593915 systemd[1]: Starting systemd-journal-flush.service... Sep 10 00:49:23.594924 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:49:23.596290 systemd[1]: Starting systemd-random-seed.service... Sep 10 00:49:23.597238 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:49:23.640581 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:49:23.642756 systemd-journald[986]: Time spent on flushing to /var/log/journal/501d3b3f6dcb497eaf6432908febd9d9 is 28.584ms for 1167 entries. Sep 10 00:49:23.642756 systemd-journald[986]: System Journal (/var/log/journal/501d3b3f6dcb497eaf6432908febd9d9) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:49:23.687551 systemd-journald[986]: Received client request to flush runtime journal. Sep 10 00:49:23.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:23.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.643163 systemd[1]: Starting systemd-sysusers.service... Sep 10 00:49:23.648297 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 10 00:49:23.649394 systemd[1]: Mounted sys-kernel-config.mount. Sep 10 00:49:23.688417 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:49:23.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:23.650980 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:49:23.653333 systemd[1]: Starting systemd-udev-settle.service... Sep 10 00:49:23.665033 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:49:23.666213 systemd[1]: Finished systemd-random-seed.service. Sep 10 00:49:23.667234 systemd[1]: Reached target first-boot-complete.target. Sep 10 00:49:23.676370 systemd[1]: Finished systemd-sysusers.service. Sep 10 00:49:23.688473 systemd[1]: Finished systemd-journal-flush.service. Sep 10 00:49:24.433452 systemd[1]: Finished systemd-hwdb-update.service. 
Sep 10 00:49:24.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.434000 audit: BPF prog-id=18 op=LOAD Sep 10 00:49:24.434000 audit: BPF prog-id=19 op=LOAD Sep 10 00:49:24.434000 audit: BPF prog-id=7 op=UNLOAD Sep 10 00:49:24.434000 audit: BPF prog-id=8 op=UNLOAD Sep 10 00:49:24.436207 systemd[1]: Starting systemd-udevd.service... Sep 10 00:49:24.458382 systemd-udevd[1013]: Using default interface naming scheme 'v252'. Sep 10 00:49:24.474222 systemd[1]: Started systemd-udevd.service. Sep 10 00:49:24.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.477000 audit: BPF prog-id=20 op=LOAD Sep 10 00:49:24.478561 systemd[1]: Starting systemd-networkd.service... Sep 10 00:49:24.482000 audit: BPF prog-id=21 op=LOAD Sep 10 00:49:24.482000 audit: BPF prog-id=22 op=LOAD Sep 10 00:49:24.482000 audit: BPF prog-id=23 op=LOAD Sep 10 00:49:24.484062 systemd[1]: Starting systemd-userdbd.service... Sep 10 00:49:24.525800 systemd[1]: Started systemd-userdbd.service. Sep 10 00:49:24.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.556171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:49:24.557441 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Sep 10 00:49:24.559561 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:49:24.563543 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:49:24.579000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 10 00:49:24.579000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55581294e0d0 a1=338ec a2=7f83efdacbc5 a3=5 items=110 ppid=1013 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:49:24.596331 systemd-networkd[1022]: lo: Link UP Sep 10 00:49:24.596702 systemd-networkd[1022]: lo: Gained carrier Sep 10 00:49:24.597168 systemd-networkd[1022]: Enumeration completed Sep 10 00:49:24.597337 systemd[1]: Started systemd-networkd.service. Sep 10 00:49:24.597698 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:49:24.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:24.598909 systemd-networkd[1022]: eth0: Link UP Sep 10 00:49:24.599094 systemd-networkd[1022]: eth0: Gained carrier Sep 10 00:49:24.579000 audit: CWD cwd="/" Sep 10 00:49:24.579000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=1 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=2 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=3 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=4 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=5 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=6 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=7 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=8 name=(null) inode=14500 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=9 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=10 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=11 name=(null) inode=14502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=12 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=13 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=14 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=15 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=16 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=17 name=(null) inode=14505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=18 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=19 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=20 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=21 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=22 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=23 name=(null) inode=14508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=24 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=25 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=26 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=27 name=(null) inode=14510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=28 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=29 name=(null) inode=14511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=30 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=31 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=32 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=33 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=34 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=35 name=(null) inode=14514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=36 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=37 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=38 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=39 name=(null) inode=14516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=40 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=41 name=(null) inode=14517 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=42 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=43 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=44 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:49:24.579000 audit: PATH item=45 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=46 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=47 name=(null) inode=14520 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=48 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=49 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=50 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=51 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=52 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=53 name=(null) inode=14523 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=54 
name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=55 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=56 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=57 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=58 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=59 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=60 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=61 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=62 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=63 name=(null) inode=14528 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=64 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=65 name=(null) inode=14529 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=66 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=67 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=68 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=69 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=70 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=71 name=(null) inode=14532 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=72 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=73 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=74 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=75 name=(null) inode=14534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=76 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=77 name=(null) inode=14535 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=78 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=79 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=80 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=81 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=82 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=83 name=(null) inode=14538 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=84 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=85 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=86 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=87 name=(null) inode=14540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=88 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=89 name=(null) inode=14541 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=90 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=91 name=(null) inode=14542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=92 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=93 name=(null) inode=14543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=94 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=95 name=(null) inode=14544 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=96 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=97 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=98 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=99 name=(null) inode=14546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:49:24.579000 audit: PATH item=100 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=101 name=(null) inode=14547 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=102 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=103 name=(null) inode=14548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=104 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=105 name=(null) inode=14549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=106 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=107 name=(null) inode=14550 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PATH item=109 
name=(null) inode=14551 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:49:24.579000 audit: PROCTITLE proctitle="(udev-worker)" Sep 10 00:49:24.613687 systemd-networkd[1022]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:49:24.623552 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:49:24.627554 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:49:24.636778 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 00:49:24.639492 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:49:24.639641 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:49:24.639780 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:49:24.669656 kernel: kvm: Nested Virtualization enabled Sep 10 00:49:24.669760 kernel: SVM: kvm: Nested Paging enabled Sep 10 00:49:24.669802 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 10 00:49:24.670838 kernel: SVM: Virtual GIF supported Sep 10 00:49:24.689553 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:49:24.714426 systemd[1]: Finished systemd-udev-settle.service. Sep 10 00:49:24.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.716874 systemd[1]: Starting lvm2-activation-early.service... Sep 10 00:49:24.725172 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:49:24.755308 systemd[1]: Finished lvm2-activation-early.service. Sep 10 00:49:24.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:49:24.756776 systemd[1]: Reached target cryptsetup.target. Sep 10 00:49:24.760172 systemd[1]: Starting lvm2-activation.service... Sep 10 00:49:24.765217 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:49:24.794954 systemd[1]: Finished lvm2-activation.service. Sep 10 00:49:24.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.796234 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:49:24.797231 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:49:24.797255 systemd[1]: Reached target local-fs.target. Sep 10 00:49:24.798120 systemd[1]: Reached target machines.target. Sep 10 00:49:24.800611 systemd[1]: Starting ldconfig.service... Sep 10 00:49:24.801972 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:49:24.802028 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:24.803359 systemd[1]: Starting systemd-boot-update.service... Sep 10 00:49:24.805956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 10 00:49:24.809186 systemd[1]: Starting systemd-machine-id-commit.service... Sep 10 00:49:24.812434 systemd[1]: Starting systemd-sysext.service... Sep 10 00:49:24.815587 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Sep 10 00:49:24.817136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 10 00:49:24.819654 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 10 00:49:24.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:24.828892 systemd[1]: Unmounting usr-share-oem.mount... Sep 10 00:49:24.832608 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 10 00:49:24.832763 systemd[1]: Unmounted usr-share-oem.mount. Sep 10 00:49:24.843559 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:49:25.054636 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:49:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.055434 systemd[1]: Finished systemd-machine-id-commit.service. Sep 10 00:49:25.064564 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:49:25.069444 systemd-fsck[1059]: fsck.fat 4.2 (2021-01-31) Sep 10 00:49:25.069444 systemd-fsck[1059]: /dev/vda1: 791 files, 120785/258078 clusters Sep 10 00:49:25.071165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 10 00:49:25.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.075052 systemd[1]: Mounting boot.mount... Sep 10 00:49:25.078563 kernel: loop1: detected capacity change from 0 to 221472 Sep 10 00:49:25.084002 (sd-sysext)[1065]: Using extensions 'kubernetes'. Sep 10 00:49:25.084451 (sd-sysext)[1065]: Merged extensions into '/usr'. Sep 10 00:49:25.094047 systemd[1]: Mounted boot.mount. 
Sep 10 00:49:25.102350 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:25.104182 systemd[1]: Mounting usr-share-oem.mount... Sep 10 00:49:25.105672 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.107240 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:49:25.110039 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:49:25.112317 systemd[1]: Starting modprobe@loop.service... Sep 10 00:49:25.113416 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.113548 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:25.113668 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:25.116642 systemd[1]: Finished systemd-boot-update.service. Sep 10 00:49:25.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.118091 systemd[1]: Mounted usr-share-oem.mount. Sep 10 00:49:25.119377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:49:25.119504 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:49:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:49:25.120987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:49:25.121126 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:49:25.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.122740 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:49:25.122847 systemd[1]: Finished modprobe@loop.service. Sep 10 00:49:25.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.124388 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:49:25.124519 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.125471 systemd[1]: Finished systemd-sysext.service. Sep 10 00:49:25.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.129285 systemd[1]: Starting ensure-sysext.service... 
Sep 10 00:49:25.131302 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 10 00:49:25.135125 systemd[1]: Reloading. Sep 10 00:49:25.145120 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 10 00:49:25.149190 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:49:25.152860 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:49:25.153037 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:49:25.259084 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-10T00:49:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:49:25.259477 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-10T00:49:25Z" level=info msg="torcx already run" Sep 10 00:49:25.339488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:49:25.339507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:49:25.356799 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 10 00:49:25.415000 audit: BPF prog-id=24 op=LOAD Sep 10 00:49:25.415000 audit: BPF prog-id=15 op=UNLOAD Sep 10 00:49:25.416000 audit: BPF prog-id=25 op=LOAD Sep 10 00:49:25.416000 audit: BPF prog-id=26 op=LOAD Sep 10 00:49:25.416000 audit: BPF prog-id=16 op=UNLOAD Sep 10 00:49:25.416000 audit: BPF prog-id=17 op=UNLOAD Sep 10 00:49:25.417000 audit: BPF prog-id=27 op=LOAD Sep 10 00:49:25.417000 audit: BPF prog-id=21 op=UNLOAD Sep 10 00:49:25.417000 audit: BPF prog-id=28 op=LOAD Sep 10 00:49:25.417000 audit: BPF prog-id=29 op=LOAD Sep 10 00:49:25.417000 audit: BPF prog-id=22 op=UNLOAD Sep 10 00:49:25.417000 audit: BPF prog-id=23 op=UNLOAD Sep 10 00:49:25.418000 audit: BPF prog-id=30 op=LOAD Sep 10 00:49:25.418000 audit: BPF prog-id=31 op=LOAD Sep 10 00:49:25.418000 audit: BPF prog-id=18 op=UNLOAD Sep 10 00:49:25.418000 audit: BPF prog-id=19 op=UNLOAD Sep 10 00:49:25.419000 audit: BPF prog-id=32 op=LOAD Sep 10 00:49:25.419000 audit: BPF prog-id=20 op=UNLOAD Sep 10 00:49:25.422266 systemd[1]: Finished ldconfig.service. Sep 10 00:49:25.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.423355 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 10 00:49:25.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.427989 systemd[1]: Starting audit-rules.service... Sep 10 00:49:25.430020 systemd[1]: Starting clean-ca-certificates.service... Sep 10 00:49:25.433000 audit: BPF prog-id=33 op=LOAD Sep 10 00:49:25.432114 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 10 00:49:25.434497 systemd[1]: Starting systemd-resolved.service... 
Sep 10 00:49:25.435000 audit: BPF prog-id=34 op=LOAD Sep 10 00:49:25.436963 systemd[1]: Starting systemd-timesyncd.service... Sep 10 00:49:25.440965 systemd[1]: Starting systemd-update-utmp.service... Sep 10 00:49:25.442407 systemd[1]: Finished clean-ca-certificates.service. Sep 10 00:49:25.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.446929 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.448192 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:49:25.450000 audit[1145]: SYSTEM_BOOT pid=1145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.451871 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:49:25.454016 systemd[1]: Starting modprobe@loop.service... Sep 10 00:49:25.454787 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.454946 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:25.455058 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:49:25.456105 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 10 00:49:25.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:25.457447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:49:25.457584 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:49:25.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.458745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:49:25.458862 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:49:25.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.460020 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:49:25.460118 systemd[1]: Finished modprobe@loop.service. Sep 10 00:49:25.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:49:25.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:49:25.463407 augenrules[1157]: No rules Sep 10 00:49:25.463798 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:49:25.463981 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.462000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 10 00:49:25.462000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9dc31d10 a2=420 a3=0 items=0 ppid=1134 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:49:25.462000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 10 00:49:25.465616 systemd[1]: Starting systemd-update-done.service... Sep 10 00:49:25.467345 systemd[1]: Finished audit-rules.service. Sep 10 00:49:25.469762 systemd[1]: Finished systemd-update-utmp.service. Sep 10 00:49:25.471580 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.472761 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:49:25.475002 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:49:25.477891 systemd[1]: Starting modprobe@loop.service... Sep 10 00:49:25.478692 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.478805 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 10 00:49:25.478896 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:49:25.479741 systemd[1]: Finished systemd-update-done.service. Sep 10 00:49:25.481173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:49:25.481390 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:49:25.482852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:49:25.483047 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:49:25.484570 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:49:25.484814 systemd[1]: Finished modprobe@loop.service. Sep 10 00:49:25.489109 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.490657 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:49:25.492888 systemd[1]: Starting modprobe@drm.service... Sep 10 00:49:25.495102 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:49:25.497264 systemd[1]: Starting modprobe@loop.service... Sep 10 00:49:25.498110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.498270 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:25.499208 systemd-resolved[1138]: Positive Trust Anchors: Sep 10 00:49:25.499219 systemd-resolved[1138]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:49:25.499260 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:49:25.500282 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 10 00:49:25.503659 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:49:25.505074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:49:25.505262 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:49:25.506611 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:49:25.506765 systemd[1]: Finished modprobe@drm.service. Sep 10 00:49:25.508055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:49:25.508201 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:49:25.508367 systemd-resolved[1138]: Defaulting to hostname 'linux'. Sep 10 00:49:25.509420 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:49:25.509554 systemd[1]: Finished modprobe@loop.service. Sep 10 00:49:25.510554 systemd[1]: Started systemd-resolved.service. Sep 10 00:49:25.511833 systemd[1]: Reached target network.target. Sep 10 00:49:25.512685 systemd[1]: Reached target nss-lookup.target. Sep 10 00:49:25.513546 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 10 00:49:25.513600 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.513998 systemd[1]: Finished ensure-sysext.service. Sep 10 00:49:25.519269 systemd[1]: Started systemd-timesyncd.service. Sep 10 00:49:25.520216 systemd[1]: Reached target sysinit.target. Sep 10 00:49:25.521124 systemd[1]: Started motdgen.path. Sep 10 00:49:25.521956 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 10 00:49:25.521968 systemd-timesyncd[1141]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:49:25.523025 systemd-timesyncd[1141]: Initial clock synchronization to Wed 2025-09-10 00:49:25.591562 UTC. Sep 10 00:49:25.523061 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 10 00:49:25.523912 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:49:25.523945 systemd[1]: Reached target paths.target. Sep 10 00:49:25.524711 systemd[1]: Reached target time-set.target. Sep 10 00:49:25.525863 systemd[1]: Started logrotate.timer. Sep 10 00:49:25.526720 systemd[1]: Started mdadm.timer. Sep 10 00:49:25.527402 systemd[1]: Reached target timers.target. Sep 10 00:49:25.528678 systemd[1]: Listening on dbus.socket. Sep 10 00:49:25.530592 systemd[1]: Starting docker.socket... Sep 10 00:49:25.533465 systemd[1]: Listening on sshd.socket. Sep 10 00:49:25.534281 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:25.534722 systemd[1]: Listening on docker.socket. Sep 10 00:49:25.535488 systemd[1]: Reached target sockets.target. Sep 10 00:49:25.536239 systemd[1]: Reached target basic.target. Sep 10 00:49:25.536980 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Sep 10 00:49:25.537004 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:49:25.537942 systemd[1]: Starting containerd.service... Sep 10 00:49:25.539848 systemd[1]: Starting dbus.service... Sep 10 00:49:25.541670 systemd[1]: Starting enable-oem-cloudinit.service... Sep 10 00:49:25.543854 systemd[1]: Starting extend-filesystems.service... Sep 10 00:49:25.544819 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 10 00:49:25.546280 systemd[1]: Starting motdgen.service... Sep 10 00:49:25.547786 jq[1176]: false Sep 10 00:49:25.549270 systemd[1]: Starting prepare-helm.service... Sep 10 00:49:25.551483 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 10 00:49:25.553777 systemd[1]: Starting sshd-keygen.service... Sep 10 00:49:25.557825 systemd[1]: Starting systemd-logind.service... Sep 10 00:49:25.558617 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:49:25.558712 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:49:25.559201 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:49:25.560182 systemd[1]: Starting update-engine.service... Sep 10 00:49:25.563098 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 10 00:49:25.581378 jq[1194]: true Sep 10 00:49:25.619265 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:49:25.619464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 10 00:49:25.620794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 10 00:49:25.621017 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 10 00:49:25.622695 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:49:25.622830 systemd[1]: Finished motdgen.service. Sep 10 00:49:25.628425 jq[1198]: true Sep 10 00:49:25.634756 tar[1197]: linux-amd64/helm Sep 10 00:49:25.637521 extend-filesystems[1177]: Found loop1 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found sr0 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda1 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda2 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda3 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found usr Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda4 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda6 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda7 Sep 10 00:49:25.637521 extend-filesystems[1177]: Found vda9 Sep 10 00:49:25.637521 extend-filesystems[1177]: Checking size of /dev/vda9 Sep 10 00:49:25.643249 dbus-daemon[1175]: [system] SELinux support is enabled Sep 10 00:49:25.643680 systemd[1]: Started dbus.service. Sep 10 00:49:25.645556 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:49:25.645576 systemd[1]: Reached target system-config.target. Sep 10 00:49:25.646887 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:49:25.646911 systemd[1]: Reached target user-config.target. Sep 10 00:49:25.659302 update_engine[1192]: I0910 00:49:25.659051 1192 main.cc:92] Flatcar Update Engine starting Sep 10 00:49:25.661995 systemd[1]: Started update-engine.service. 
Sep 10 00:49:25.664183 update_engine[1192]: I0910 00:49:25.662081 1192 update_check_scheduler.cc:74] Next update check in 11m9s Sep 10 00:49:25.665408 extend-filesystems[1177]: Resized partition /dev/vda9 Sep 10 00:49:25.665765 systemd[1]: Started locksmithd.service. Sep 10 00:49:25.668996 extend-filesystems[1224]: resize2fs 1.46.5 (30-Dec-2021) Sep 10 00:49:25.694014 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:49:25.772023 systemd-logind[1190]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:49:25.772048 systemd-logind[1190]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:49:25.772258 systemd-logind[1190]: New seat seat0. Sep 10 00:49:25.774224 systemd[1]: Started systemd-logind.service. Sep 10 00:49:25.777541 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:49:25.833681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:25.833755 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:49:25.860566 env[1199]: time="2025-09-10T00:49:25.851128558Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 10 00:49:25.853354 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 10 00:49:25.861022 bash[1226]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:49:25.862318 extend-filesystems[1224]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:49:25.862318 extend-filesystems[1224]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:49:25.862318 extend-filesystems[1224]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:49:25.895763 extend-filesystems[1177]: Resized filesystem in /dev/vda9 Sep 10 00:49:25.862851 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 10 00:49:25.863066 systemd[1]: Finished extend-filesystems.service. Sep 10 00:49:25.898765 env[1199]: time="2025-09-10T00:49:25.898473748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:49:25.898765 env[1199]: time="2025-09-10T00:49:25.898697126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:49:25.902800 env[1199]: time="2025-09-10T00:49:25.902759515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:49:25.902800 env[1199]: time="2025-09-10T00:49:25.902794931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903119 env[1199]: time="2025-09-10T00:49:25.903035933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903119 env[1199]: time="2025-09-10T00:49:25.903055590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903119 env[1199]: time="2025-09-10T00:49:25.903071420Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 10 00:49:25.903119 env[1199]: time="2025-09-10T00:49:25.903082511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903373 env[1199]: time="2025-09-10T00:49:25.903150829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903427 env[1199]: time="2025-09-10T00:49:25.903402901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903566 env[1199]: time="2025-09-10T00:49:25.903523217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:49:25.903566 env[1199]: time="2025-09-10T00:49:25.903563322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:49:25.903638 env[1199]: time="2025-09-10T00:49:25.903607786Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 10 00:49:25.903638 env[1199]: time="2025-09-10T00:49:25.903621932Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:49:25.910127 env[1199]: time="2025-09-10T00:49:25.910102435Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:49:25.911093 env[1199]: time="2025-09-10T00:49:25.910192094Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:49:25.911302 env[1199]: time="2025-09-10T00:49:25.911263152Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911345417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911366536Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911379971Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911390531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911402243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911415819Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911431398Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911447157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.911512 env[1199]: time="2025-09-10T00:49:25.911457447Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:49:25.911706 env[1199]: time="2025-09-10T00:49:25.911576901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:49:25.911767 env[1199]: time="2025-09-10T00:49:25.911747190Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912292212Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912327719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912339391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912395356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912407288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912420192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912430051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912448455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912462602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912474484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912484473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912506173Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912649021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912662707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913633 env[1199]: time="2025-09-10T00:49:25.912675551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:49:25.913977 env[1199]: time="2025-09-10T00:49:25.912687023Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:49:25.913977 env[1199]: time="2025-09-10T00:49:25.912699877Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 10 00:49:25.913977 env[1199]: time="2025-09-10T00:49:25.912737487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:49:25.913977 env[1199]: time="2025-09-10T00:49:25.912766502Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 10 00:49:25.913977 env[1199]: time="2025-09-10T00:49:25.912809442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:49:25.914076 env[1199]: time="2025-09-10T00:49:25.913066514Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:49:25.914076 env[1199]: time="2025-09-10T00:49:25.913130644Z" level=info msg="Connect containerd service" Sep 10 00:49:25.914076 env[1199]: time="2025-09-10T00:49:25.913178144Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:49:25.914076 env[1199]: time="2025-09-10T00:49:25.913743724Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:49:25.916273 env[1199]: time="2025-09-10T00:49:25.914154014Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:49:25.916273 env[1199]: time="2025-09-10T00:49:25.914187887Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:49:25.916273 env[1199]: time="2025-09-10T00:49:25.914227021Z" level=info msg="containerd successfully booted in 0.105538s" Sep 10 00:49:25.914307 systemd[1]: Started containerd.service. 
Sep 10 00:49:25.918352 env[1199]: time="2025-09-10T00:49:25.918258110Z" level=info msg="Start subscribing containerd event" Sep 10 00:49:25.918352 env[1199]: time="2025-09-10T00:49:25.918327931Z" level=info msg="Start recovering state" Sep 10 00:49:25.918417 env[1199]: time="2025-09-10T00:49:25.918406318Z" level=info msg="Start event monitor" Sep 10 00:49:25.918456 env[1199]: time="2025-09-10T00:49:25.918434100Z" level=info msg="Start snapshots syncer" Sep 10 00:49:25.918456 env[1199]: time="2025-09-10T00:49:25.918456593Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:49:25.918546 env[1199]: time="2025-09-10T00:49:25.918463686Z" level=info msg="Start streaming server" Sep 10 00:49:25.918773 locksmithd[1223]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:49:26.211685 sshd_keygen[1195]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:49:26.232438 systemd[1]: Finished sshd-keygen.service. Sep 10 00:49:26.235398 systemd[1]: Starting issuegen.service... Sep 10 00:49:26.242266 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:49:26.242412 systemd[1]: Finished issuegen.service. Sep 10 00:49:26.244743 systemd[1]: Starting systemd-user-sessions.service... Sep 10 00:49:26.252218 systemd[1]: Finished systemd-user-sessions.service. Sep 10 00:49:26.255150 systemd[1]: Started getty@tty1.service. Sep 10 00:49:26.257511 systemd[1]: Started serial-getty@ttyS0.service. Sep 10 00:49:26.258829 systemd[1]: Reached target getty.target. Sep 10 00:49:26.335274 tar[1197]: linux-amd64/LICENSE Sep 10 00:49:26.335450 tar[1197]: linux-amd64/README.md Sep 10 00:49:26.340255 systemd[1]: Finished prepare-helm.service. Sep 10 00:49:26.367884 systemd-networkd[1022]: eth0: Gained IPv6LL Sep 10 00:49:26.370164 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 10 00:49:26.371618 systemd[1]: Reached target network-online.target. Sep 10 00:49:26.374285 systemd[1]: Starting kubelet.service... 
Sep 10 00:49:27.402263 systemd[1]: Started kubelet.service. Sep 10 00:49:27.403788 systemd[1]: Reached target multi-user.target. Sep 10 00:49:27.406305 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 10 00:49:27.511684 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 10 00:49:27.511870 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 10 00:49:27.513149 systemd[1]: Startup finished in 883ms (kernel) + 6.502s (initrd) + 9.131s (userspace) = 16.517s. Sep 10 00:49:28.147716 kubelet[1256]: E0910 00:49:28.147638 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:49:28.149509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:49:28.149661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:49:28.149943 systemd[1]: kubelet.service: Consumed 1.630s CPU time. Sep 10 00:49:35.359578 systemd[1]: Created slice system-sshd.slice. Sep 10 00:49:35.360761 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:39214.service. Sep 10 00:49:35.397988 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 39214 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:49:35.399478 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.409410 systemd-logind[1190]: New session 1 of user core. Sep 10 00:49:35.410490 systemd[1]: Created slice user-500.slice. Sep 10 00:49:35.411728 systemd[1]: Starting user-runtime-dir@500.service... Sep 10 00:49:35.421408 systemd[1]: Finished user-runtime-dir@500.service. Sep 10 00:49:35.423697 systemd[1]: Starting user@500.service... 
Sep 10 00:49:35.426666 (systemd)[1268]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.511030 systemd[1268]: Queued start job for default target default.target. Sep 10 00:49:35.511667 systemd[1268]: Reached target paths.target. Sep 10 00:49:35.511693 systemd[1268]: Reached target sockets.target. Sep 10 00:49:35.511710 systemd[1268]: Reached target timers.target. Sep 10 00:49:35.511726 systemd[1268]: Reached target basic.target. Sep 10 00:49:35.511776 systemd[1268]: Reached target default.target. Sep 10 00:49:35.511813 systemd[1268]: Startup finished in 79ms. Sep 10 00:49:35.512018 systemd[1]: Started user@500.service. Sep 10 00:49:35.513253 systemd[1]: Started session-1.scope. Sep 10 00:49:35.565796 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:39222.service. Sep 10 00:49:35.600779 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:49:35.602480 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.607078 systemd-logind[1190]: New session 2 of user core. Sep 10 00:49:35.608497 systemd[1]: Started session-2.scope. Sep 10 00:49:35.663766 sshd[1277]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:35.666809 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:39224.service. Sep 10 00:49:35.667328 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:39222.service: Deactivated successfully. Sep 10 00:49:35.667883 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:49:35.668467 systemd-logind[1190]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:49:35.669483 systemd-logind[1190]: Removed session 2. 
Sep 10 00:49:35.702641 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:49:35.704092 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.708201 systemd-logind[1190]: New session 3 of user core. Sep 10 00:49:35.709003 systemd[1]: Started session-3.scope. Sep 10 00:49:35.760239 sshd[1282]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:35.763472 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:39224.service: Deactivated successfully. Sep 10 00:49:35.764078 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:49:35.764700 systemd-logind[1190]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:49:35.766044 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:39238.service. Sep 10 00:49:35.766950 systemd-logind[1190]: Removed session 3. Sep 10 00:49:35.798106 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 39238 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:49:35.799128 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.802631 systemd-logind[1190]: New session 4 of user core. Sep 10 00:49:35.803488 systemd[1]: Started session-4.scope. Sep 10 00:49:35.857648 sshd[1290]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:35.860637 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:39238.service: Deactivated successfully. Sep 10 00:49:35.861199 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:49:35.861793 systemd-logind[1190]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:49:35.863018 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:39250.service. Sep 10 00:49:35.863714 systemd-logind[1190]: Removed session 4. 
Sep 10 00:49:35.894776 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 39250 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:49:35.895967 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:49:35.899958 systemd-logind[1190]: New session 5 of user core. Sep 10 00:49:35.901481 systemd[1]: Started session-5.scope. Sep 10 00:49:35.956685 sudo[1299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:49:35.956873 sudo[1299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:49:35.989194 systemd[1]: Starting docker.service... Sep 10 00:49:37.422978 env[1311]: time="2025-09-10T00:49:37.422907434Z" level=info msg="Starting up" Sep 10 00:49:37.424414 env[1311]: time="2025-09-10T00:49:37.424378360Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:49:37.424414 env[1311]: time="2025-09-10T00:49:37.424399311Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:49:37.424512 env[1311]: time="2025-09-10T00:49:37.424429940Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:49:37.424512 env[1311]: time="2025-09-10T00:49:37.424440772Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:49:37.426884 env[1311]: time="2025-09-10T00:49:37.426859759Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:49:37.426884 env[1311]: time="2025-09-10T00:49:37.426876514Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:49:37.426983 env[1311]: time="2025-09-10T00:49:37.426888138Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:49:37.426983 env[1311]: time="2025-09-10T00:49:37.426896129Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:49:37.643318 env[1311]: time="2025-09-10T00:49:37.643254579Z" level=info msg="Loading containers: start." Sep 10 00:49:37.845564 kernel: Initializing XFRM netlink socket Sep 10 00:49:37.873030 env[1311]: time="2025-09-10T00:49:37.872974037Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 10 00:49:38.099744 systemd-networkd[1022]: docker0: Link UP Sep 10 00:49:38.113723 env[1311]: time="2025-09-10T00:49:38.113678939Z" level=info msg="Loading containers: done." Sep 10 00:49:38.126825 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4284719543-merged.mount: Deactivated successfully. Sep 10 00:49:38.128056 env[1311]: time="2025-09-10T00:49:38.128012721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:49:38.128247 env[1311]: time="2025-09-10T00:49:38.128215195Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 10 00:49:38.128339 env[1311]: time="2025-09-10T00:49:38.128317546Z" level=info msg="Daemon has completed initialization" Sep 10 00:49:38.148011 systemd[1]: Started docker.service. Sep 10 00:49:38.156251 env[1311]: time="2025-09-10T00:49:38.156169095Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:49:38.400755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:49:38.401005 systemd[1]: Stopped kubelet.service. Sep 10 00:49:38.401057 systemd[1]: kubelet.service: Consumed 1.630s CPU time. Sep 10 00:49:38.402742 systemd[1]: Starting kubelet.service... Sep 10 00:49:38.543980 systemd[1]: Started kubelet.service. 
Sep 10 00:49:38.589340 kubelet[1442]: E0910 00:49:38.589272 1442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:49:38.591819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:49:38.591943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:49:38.864596 env[1199]: time="2025-09-10T00:49:38.864445611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:49:39.857987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783711356.mount: Deactivated successfully. Sep 10 00:49:43.579026 env[1199]: time="2025-09-10T00:49:43.578956383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:43.588018 env[1199]: time="2025-09-10T00:49:43.587929561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:43.592008 env[1199]: time="2025-09-10T00:49:43.591971578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:43.594201 env[1199]: time="2025-09-10T00:49:43.594125730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:43.595132 env[1199]: time="2025-09-10T00:49:43.595070550Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:49:43.596186 env[1199]: time="2025-09-10T00:49:43.596135502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:49:47.281033 env[1199]: time="2025-09-10T00:49:47.280956227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:47.284758 env[1199]: time="2025-09-10T00:49:47.284717619Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:47.287004 env[1199]: time="2025-09-10T00:49:47.286955624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:47.289455 env[1199]: time="2025-09-10T00:49:47.289405527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:47.290492 env[1199]: time="2025-09-10T00:49:47.290417678Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 00:49:47.291594 env[1199]: time="2025-09-10T00:49:47.291553957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:49:48.866843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:49:48.867129 systemd[1]: Stopped kubelet.service. 
Sep 10 00:49:48.869336 systemd[1]: Starting kubelet.service... Sep 10 00:49:49.026900 systemd[1]: Started kubelet.service. Sep 10 00:49:49.219013 kubelet[1456]: E0910 00:49:49.218811 1456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:49:49.220853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:49:49.220985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:49:51.893141 env[1199]: time="2025-09-10T00:49:51.893058847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:52.166316 env[1199]: time="2025-09-10T00:49:52.166146769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:52.310482 env[1199]: time="2025-09-10T00:49:52.310256963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:52.465773 env[1199]: time="2025-09-10T00:49:52.465611525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:52.466837 env[1199]: time="2025-09-10T00:49:52.466785734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference 
\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:49:52.467418 env[1199]: time="2025-09-10T00:49:52.467393948Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:49:54.228718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088269225.mount: Deactivated successfully. Sep 10 00:49:55.590553 env[1199]: time="2025-09-10T00:49:55.590441383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:55.594859 env[1199]: time="2025-09-10T00:49:55.594787789Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:55.596832 env[1199]: time="2025-09-10T00:49:55.596800027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:55.599584 env[1199]: time="2025-09-10T00:49:55.599455057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:55.601936 env[1199]: time="2025-09-10T00:49:55.601863498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:49:55.602663 env[1199]: time="2025-09-10T00:49:55.602624083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:49:56.905793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224854436.mount: Deactivated successfully. 
Sep 10 00:49:58.361377 env[1199]: time="2025-09-10T00:49:58.361263319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:58.458184 env[1199]: time="2025-09-10T00:49:58.458127632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:58.551684 env[1199]: time="2025-09-10T00:49:58.551617466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:58.584358 env[1199]: time="2025-09-10T00:49:58.584283770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:49:58.585369 env[1199]: time="2025-09-10T00:49:58.585289956Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:49:58.586010 env[1199]: time="2025-09-10T00:49:58.585967975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:49:59.303969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 00:49:59.304163 systemd[1]: Stopped kubelet.service. Sep 10 00:49:59.305864 systemd[1]: Starting kubelet.service... Sep 10 00:49:59.404251 systemd[1]: Started kubelet.service. 
Sep 10 00:49:59.488912 kubelet[1467]: E0910 00:49:59.488847 1467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:49:59.490694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:49:59.490852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:50:00.286741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485529085.mount: Deactivated successfully. Sep 10 00:50:00.292433 env[1199]: time="2025-09-10T00:50:00.292385479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:00.294520 env[1199]: time="2025-09-10T00:50:00.294489212Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:00.296201 env[1199]: time="2025-09-10T00:50:00.296161954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:00.297596 env[1199]: time="2025-09-10T00:50:00.297556297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:00.298057 env[1199]: time="2025-09-10T00:50:00.298017006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 
00:50:00.298789 env[1199]: time="2025-09-10T00:50:00.298759984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:50:00.883042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275882327.mount: Deactivated successfully. Sep 10 00:50:04.548624 env[1199]: time="2025-09-10T00:50:04.548543232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:04.550478 env[1199]: time="2025-09-10T00:50:04.550421777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:04.552450 env[1199]: time="2025-09-10T00:50:04.552418127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:04.554621 env[1199]: time="2025-09-10T00:50:04.554582320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:04.555352 env[1199]: time="2025-09-10T00:50:04.555320348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:50:06.844634 systemd[1]: Stopped kubelet.service. Sep 10 00:50:06.847463 systemd[1]: Starting kubelet.service... Sep 10 00:50:06.868146 systemd[1]: Reloading. 
Sep 10 00:50:06.951659 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2025-09-10T00:50:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:50:06.952043 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2025-09-10T00:50:06Z" level=info msg="torcx already run" Sep 10 00:50:08.058467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:50:08.058486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:50:08.076331 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:50:08.155984 systemd[1]: Started kubelet.service. Sep 10 00:50:08.157622 systemd[1]: Stopping kubelet.service... Sep 10 00:50:08.157880 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:50:08.158076 systemd[1]: Stopped kubelet.service. Sep 10 00:50:08.159627 systemd[1]: Starting kubelet.service... Sep 10 00:50:08.271784 systemd[1]: Started kubelet.service. Sep 10 00:50:08.321152 kubelet[1571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:50:08.321152 kubelet[1571]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 10 00:50:08.321152 kubelet[1571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:50:08.321152 kubelet[1571]: I0910 00:50:08.321108 1571 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:50:08.566410 kubelet[1571]: I0910 00:50:08.566325 1571 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:50:08.566410 kubelet[1571]: I0910 00:50:08.566379 1571 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:50:08.566760 kubelet[1571]: I0910 00:50:08.566726 1571 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:50:08.617996 kubelet[1571]: E0910 00:50:08.617846 1571 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:50:08.618618 kubelet[1571]: I0910 00:50:08.618589 1571 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:50:08.625066 kubelet[1571]: E0910 00:50:08.625011 1571 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:50:08.625066 kubelet[1571]: I0910 00:50:08.625061 1571 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 10 00:50:08.635450 kubelet[1571]: I0910 00:50:08.635402 1571 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:50:08.636142 kubelet[1571]: I0910 00:50:08.636111 1571 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:50:08.636326 kubelet[1571]: I0910 00:50:08.636279 1571 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:50:08.636587 kubelet[1571]: I0910 00:50:08.636320 1571 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:50:08.636705 kubelet[1571]: I0910 00:50:08.636607 1571 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:50:08.636705 kubelet[1571]: I0910 00:50:08.636618 1571 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:50:08.636776 kubelet[1571]: I0910 00:50:08.636762 1571 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:50:08.643899 kubelet[1571]: I0910 00:50:08.643778 1571 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:50:08.643899 kubelet[1571]: I0910 00:50:08.643821 1571 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:50:08.643899 kubelet[1571]: I0910 00:50:08.643881 1571 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:50:08.643899 kubelet[1571]: I0910 00:50:08.643914 1571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:50:08.670122 kubelet[1571]: W0910 00:50:08.670043 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 10 00:50:08.670217 kubelet[1571]: E0910 00:50:08.670137 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:50:08.670217 kubelet[1571]: W0910 00:50:08.670148 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 10 00:50:08.670306 kubelet[1571]: E0910 00:50:08.670254 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:50:08.671379 kubelet[1571]: I0910 00:50:08.671357 1571 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:50:08.671831 kubelet[1571]: I0910 00:50:08.671805 1571 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:50:08.672506 kubelet[1571]: W0910 00:50:08.672458 1571 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:50:08.674370 kubelet[1571]: I0910 00:50:08.674328 1571 server.go:1274] "Started kubelet" Sep 10 00:50:08.674456 kubelet[1571]: I0910 00:50:08.674399 1571 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:50:08.675253 kubelet[1571]: I0910 00:50:08.675096 1571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:50:08.675614 kubelet[1571]: I0910 00:50:08.675588 1571 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:50:08.675736 kubelet[1571]: I0910 00:50:08.675714 1571 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:50:08.678564 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 10 00:50:08.678734 kubelet[1571]: I0910 00:50:08.678687 1571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:50:08.685949 kubelet[1571]: I0910 00:50:08.685904 1571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:50:08.687433 kubelet[1571]: I0910 00:50:08.687412 1571 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:50:08.687734 kubelet[1571]: E0910 00:50:08.687694 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:08.690163 kubelet[1571]: I0910 00:50:08.690142 1571 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:50:08.690364 kubelet[1571]: I0910 00:50:08.690339 1571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:50:08.690729 kubelet[1571]: I0910 00:50:08.690698 1571 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:50:08.691063 kubelet[1571]: I0910 00:50:08.691039 1571 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:50:08.691477 kubelet[1571]: W0910 00:50:08.691426 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:08.691631 kubelet[1571]: E0910 00:50:08.691603 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:08.691746 kubelet[1571]: E0910 00:50:08.691465 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms"
Sep 10 00:50:08.692201 kubelet[1571]: I0910 00:50:08.692180 1571 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:50:08.692333 kubelet[1571]: E0910 00:50:08.692291 1571 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:50:08.693496 kubelet[1571]: E0910 00:50:08.688818 1571 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c57a4d116cee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:50:08.674295022 +0000 UTC m=+0.399454712,LastTimestamp:2025-09-10 00:50:08.674295022 +0000 UTC m=+0.399454712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:50:08.704087 kubelet[1571]: I0910 00:50:08.704024 1571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:50:08.706870 kubelet[1571]: I0910 00:50:08.706809 1571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:50:08.707028 kubelet[1571]: I0910 00:50:08.707016 1571 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:50:08.707097 kubelet[1571]: I0910 00:50:08.707082 1571 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:50:08.707190 kubelet[1571]: E0910 00:50:08.707169 1571 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:50:08.707906 kubelet[1571]: I0910 00:50:08.707877 1571 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:50:08.707906 kubelet[1571]: I0910 00:50:08.707899 1571 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:50:08.708008 kubelet[1571]: I0910 00:50:08.707920 1571 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:50:08.709577 kubelet[1571]: W0910 00:50:08.709501 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:08.709637 kubelet[1571]: E0910 00:50:08.709594 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:08.788003 kubelet[1571]: E0910 00:50:08.787962 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:08.807913 kubelet[1571]: E0910 00:50:08.807863 1571 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:50:08.888220 kubelet[1571]: E0910 00:50:08.888075 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:08.892875 kubelet[1571]: E0910 00:50:08.892806 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms"
Sep 10 00:50:08.989238 kubelet[1571]: E0910 00:50:08.989167 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.008757 kubelet[1571]: E0910 00:50:09.008678 1571 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:50:09.089997 kubelet[1571]: E0910 00:50:09.089924 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.190122 kubelet[1571]: E0910 00:50:09.190009 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.290683 kubelet[1571]: E0910 00:50:09.290616 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.294429 kubelet[1571]: E0910 00:50:09.294391 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms"
Sep 10 00:50:09.391765 kubelet[1571]: E0910 00:50:09.391696 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.409160 kubelet[1571]: E0910 00:50:09.409097 1571 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:50:09.492731 kubelet[1571]: E0910 00:50:09.492567 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.593253 kubelet[1571]: E0910 00:50:09.593172 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.694098 kubelet[1571]: E0910 00:50:09.694041 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.794937 kubelet[1571]: E0910 00:50:09.794780 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.889012 kubelet[1571]: W0910 00:50:09.888894 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:09.889012 kubelet[1571]: E0910 00:50:09.889001 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:09.893021 kubelet[1571]: W0910 00:50:09.892931 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:09.893097 kubelet[1571]: E0910 00:50:09.893038 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:09.895350 kubelet[1571]: E0910 00:50:09.895316 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:09.996111 kubelet[1571]: E0910 00:50:09.996051 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:10.035022 kubelet[1571]: W0910 00:50:10.034951 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:10.035022 kubelet[1571]: E0910 00:50:10.035015 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:10.095081 kubelet[1571]: E0910 00:50:10.095025 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s"
Sep 10 00:50:10.097195 kubelet[1571]: E0910 00:50:10.097133 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:10.100840 kubelet[1571]: W0910 00:50:10.100764 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:10.100900 kubelet[1571]: E0910 00:50:10.100850 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:10.197339 kubelet[1571]: E0910 00:50:10.197247 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:10.209644 kubelet[1571]: E0910 00:50:10.209588 1571 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:50:10.220931 kubelet[1571]: I0910 00:50:10.220883 1571 policy_none.go:49] "None policy: Start"
Sep 10 00:50:10.221904 kubelet[1571]: I0910 00:50:10.221858 1571 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:50:10.221904 kubelet[1571]: I0910 00:50:10.221906 1571 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:50:10.298344 kubelet[1571]: E0910 00:50:10.298283 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:10.399337 kubelet[1571]: E0910 00:50:10.399208 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:50:10.405994 systemd[1]: Created slice kubepods.slice.
Sep 10 00:50:10.410242 systemd[1]: Created slice kubepods-burstable.slice.
Sep 10 00:50:10.412778 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 10 00:50:10.421353 kubelet[1571]: I0910 00:50:10.421308 1571 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:50:10.421489 kubelet[1571]: I0910 00:50:10.421472 1571 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:50:10.421726 kubelet[1571]: I0910 00:50:10.421486 1571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:50:10.422224 kubelet[1571]: I0910 00:50:10.422208 1571 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:50:10.423001 kubelet[1571]: E0910 00:50:10.422971 1571 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 10 00:50:10.523946 kubelet[1571]: I0910 00:50:10.523891 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:50:10.524286 kubelet[1571]: E0910 00:50:10.524254 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Sep 10 00:50:10.726148 kubelet[1571]: I0910 00:50:10.726016 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:50:10.726502 kubelet[1571]: E0910 00:50:10.726458 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Sep 10 00:50:10.799096 kubelet[1571]: E0910 00:50:10.799047 1571 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:10.996907 update_engine[1192]: I0910 00:50:10.996712 1192 update_attempter.cc:509] Updating boot flags...
Sep 10 00:50:11.134601 kubelet[1571]: I0910 00:50:11.128274 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:50:11.134601 kubelet[1571]: E0910 00:50:11.128759 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Sep 10 00:50:11.685075 kubelet[1571]: W0910 00:50:11.685020 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:11.685442 kubelet[1571]: E0910 00:50:11.685081 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:11.695843 kubelet[1571]: E0910 00:50:11.695787 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="3.2s"
Sep 10 00:50:11.816936 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 10 00:50:11.836216 kubelet[1571]: W0910 00:50:11.836184 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:11.836338 kubelet[1571]: E0910 00:50:11.836237 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:11.838169 systemd[1]: Created slice kubepods-burstable-podaef276636f76ecb0b9a23a6322b5494c.slice.
Sep 10 00:50:11.843643 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 10 00:50:11.907819 kubelet[1571]: I0910 00:50:11.907754 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:50:11.907819 kubelet[1571]: I0910 00:50:11.907804 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:50:11.907819 kubelet[1571]: I0910 00:50:11.907835 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:50:11.908111 kubelet[1571]: I0910 00:50:11.907858 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:50:11.908111 kubelet[1571]: I0910 00:50:11.907879 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:50:11.908111 kubelet[1571]: I0910 00:50:11.907901 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:50:11.908111 kubelet[1571]: I0910 00:50:11.907925 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:50:11.908111 kubelet[1571]: I0910 00:50:11.907945 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:50:11.908263 kubelet[1571]: I0910 00:50:11.907966 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:50:11.930880 kubelet[1571]: I0910 00:50:11.930857 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:50:11.931182 kubelet[1571]: E0910 00:50:11.931153 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Sep 10 00:50:12.137026 kubelet[1571]: E0910 00:50:12.136968 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:12.137820 env[1199]: time="2025-09-10T00:50:12.137774027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 10 00:50:12.140857 kubelet[1571]: E0910 00:50:12.140834 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:12.141221 env[1199]: time="2025-09-10T00:50:12.141180890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aef276636f76ecb0b9a23a6322b5494c,Namespace:kube-system,Attempt:0,}"
Sep 10 00:50:12.146647 kubelet[1571]: E0910 00:50:12.146603 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:12.147264 env[1199]: time="2025-09-10T00:50:12.147219134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 10 00:50:12.594558 kubelet[1571]: W0910 00:50:12.594463 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:12.594558 kubelet[1571]: E0910 00:50:12.594558 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:12.858580 kubelet[1571]: W0910 00:50:12.858415 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 10 00:50:12.858580 kubelet[1571]: E0910 00:50:12.858474 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:50:13.533418 kubelet[1571]: I0910 00:50:13.533360 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:50:13.533724 kubelet[1571]: E0910 00:50:13.533689 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Sep 10 00:50:13.888841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114312059.mount: Deactivated successfully.
Sep 10 00:50:13.897604 env[1199]: time="2025-09-10T00:50:13.897522772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.901671 env[1199]: time="2025-09-10T00:50:13.901627823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.902736 env[1199]: time="2025-09-10T00:50:13.902693777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.904091 env[1199]: time="2025-09-10T00:50:13.904048344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.909442 env[1199]: time="2025-09-10T00:50:13.909403688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.910741 env[1199]: time="2025-09-10T00:50:13.910710671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.911933 env[1199]: time="2025-09-10T00:50:13.911904996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.913971 env[1199]: time="2025-09-10T00:50:13.913930940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.914724 env[1199]: time="2025-09-10T00:50:13.914680118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.916223 env[1199]: time="2025-09-10T00:50:13.916191741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.917762 env[1199]: time="2025-09-10T00:50:13.917737239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.920312 env[1199]: time="2025-09-10T00:50:13.920289295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:13.974742 env[1199]: time="2025-09-10T00:50:13.974636398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:50:13.974984 env[1199]: time="2025-09-10T00:50:13.974716134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:50:13.974984 env[1199]: time="2025-09-10T00:50:13.974729198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:50:13.974984 env[1199]: time="2025-09-10T00:50:13.974889190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e26e2a69eb8a49be01cc8382156910c6922b7d62c74ff33d2d7afb4800fc5da pid=1637 runtime=io.containerd.runc.v2
Sep 10 00:50:13.978318 env[1199]: time="2025-09-10T00:50:13.978237287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:50:13.978407 env[1199]: time="2025-09-10T00:50:13.978331210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:50:13.978407 env[1199]: time="2025-09-10T00:50:13.978390385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:50:13.978644 env[1199]: time="2025-09-10T00:50:13.978601696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f09c2dd0c3c01bebf6186fb43def23c62bba0656d70a85ccdf02e9a8211d152 pid=1627 runtime=io.containerd.runc.v2
Sep 10 00:50:14.060075 systemd[1]: Started cri-containerd-8e26e2a69eb8a49be01cc8382156910c6922b7d62c74ff33d2d7afb4800fc5da.scope.
Sep 10 00:50:14.089929 env[1199]: time="2025-09-10T00:50:14.089560162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:50:14.089929 env[1199]: time="2025-09-10T00:50:14.089631850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:50:14.089929 env[1199]: time="2025-09-10T00:50:14.089642892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:50:14.090297 env[1199]: time="2025-09-10T00:50:14.090101583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/469c3643279daa708094b624db3fd28e3ec2b8d5da95851060d157e67a2a930a pid=1672 runtime=io.containerd.runc.v2
Sep 10 00:50:14.089936 systemd[1]: Started cri-containerd-8f09c2dd0c3c01bebf6186fb43def23c62bba0656d70a85ccdf02e9a8211d152.scope.
Sep 10 00:50:14.138719 systemd[1]: Started cri-containerd-469c3643279daa708094b624db3fd28e3ec2b8d5da95851060d157e67a2a930a.scope.
Sep 10 00:50:14.197208 env[1199]: time="2025-09-10T00:50:14.197161871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e26e2a69eb8a49be01cc8382156910c6922b7d62c74ff33d2d7afb4800fc5da\""
Sep 10 00:50:14.199167 kubelet[1571]: E0910 00:50:14.198945 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:14.201456 env[1199]: time="2025-09-10T00:50:14.201423281Z" level=info msg="CreateContainer within sandbox \"8e26e2a69eb8a49be01cc8382156910c6922b7d62c74ff33d2d7afb4800fc5da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 00:50:14.208769 env[1199]: time="2025-09-10T00:50:14.208693667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aef276636f76ecb0b9a23a6322b5494c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f09c2dd0c3c01bebf6186fb43def23c62bba0656d70a85ccdf02e9a8211d152\""
Sep 10 00:50:14.210705 kubelet[1571]: E0910 00:50:14.210681 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:14.213572 env[1199]: time="2025-09-10T00:50:14.213027919Z" level=info msg="CreateContainer within sandbox \"8f09c2dd0c3c01bebf6186fb43def23c62bba0656d70a85ccdf02e9a8211d152\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 00:50:14.220679 env[1199]: time="2025-09-10T00:50:14.220617466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"469c3643279daa708094b624db3fd28e3ec2b8d5da95851060d157e67a2a930a\""
Sep 10 00:50:14.221622 kubelet[1571]: E0910 00:50:14.221587 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:14.223179 env[1199]: time="2025-09-10T00:50:14.223147643Z" level=info msg="CreateContainer within sandbox \"8e26e2a69eb8a49be01cc8382156910c6922b7d62c74ff33d2d7afb4800fc5da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed8b36834e535c5feb816a5922618635249ca4f0af883748ed1b87a3b27c89a6\""
Sep 10 00:50:14.223641 env[1199]: time="2025-09-10T00:50:14.223617546Z" level=info msg="StartContainer for \"ed8b36834e535c5feb816a5922618635249ca4f0af883748ed1b87a3b27c89a6\""
Sep 10 00:50:14.225032 env[1199]: time="2025-09-10T00:50:14.224982558Z" level=info msg="CreateContainer within sandbox \"469c3643279daa708094b624db3fd28e3ec2b8d5da95851060d157e67a2a930a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 00:50:14.239896 systemd[1]: Started cri-containerd-ed8b36834e535c5feb816a5922618635249ca4f0af883748ed1b87a3b27c89a6.scope.
Sep 10 00:50:14.367870 env[1199]: time="2025-09-10T00:50:14.367778632Z" level=info msg="StartContainer for \"ed8b36834e535c5feb816a5922618635249ca4f0af883748ed1b87a3b27c89a6\" returns successfully"
Sep 10 00:50:14.382817 env[1199]: time="2025-09-10T00:50:14.382734854Z" level=info msg="CreateContainer within sandbox \"8f09c2dd0c3c01bebf6186fb43def23c62bba0656d70a85ccdf02e9a8211d152\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28c5b249064b1a61b4a0aa1ecd98bf8c14dd7d962cce0e7c9f71c9e164514e27\""
Sep 10 00:50:14.383739 env[1199]: time="2025-09-10T00:50:14.383621817Z" level=info msg="StartContainer for \"28c5b249064b1a61b4a0aa1ecd98bf8c14dd7d962cce0e7c9f71c9e164514e27\""
Sep 10 00:50:14.388096 env[1199]: time="2025-09-10T00:50:14.388041966Z" level=info msg="CreateContainer within sandbox \"469c3643279daa708094b624db3fd28e3ec2b8d5da95851060d157e67a2a930a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"daba71b647a3f82b513bae7d38ad1b9d9409f222d709bc5fa55a4342b4d14288\""
Sep 10 00:50:14.388554 env[1199]: time="2025-09-10T00:50:14.388509514Z" level=info msg="StartContainer for \"daba71b647a3f82b513bae7d38ad1b9d9409f222d709bc5fa55a4342b4d14288\""
Sep 10 00:50:14.406703 systemd[1]: Started cri-containerd-28c5b249064b1a61b4a0aa1ecd98bf8c14dd7d962cce0e7c9f71c9e164514e27.scope.
Sep 10 00:50:14.409970 systemd[1]: Started cri-containerd-daba71b647a3f82b513bae7d38ad1b9d9409f222d709bc5fa55a4342b4d14288.scope.
Sep 10 00:50:14.616149 env[1199]: time="2025-09-10T00:50:14.616030466Z" level=info msg="StartContainer for \"28c5b249064b1a61b4a0aa1ecd98bf8c14dd7d962cce0e7c9f71c9e164514e27\" returns successfully" Sep 10 00:50:14.618513 env[1199]: time="2025-09-10T00:50:14.618464366Z" level=info msg="StartContainer for \"daba71b647a3f82b513bae7d38ad1b9d9409f222d709bc5fa55a4342b4d14288\" returns successfully" Sep 10 00:50:14.720387 kubelet[1571]: E0910 00:50:14.720255 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:14.722692 kubelet[1571]: E0910 00:50:14.722666 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:14.723962 kubelet[1571]: E0910 00:50:14.723942 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:15.725935 kubelet[1571]: E0910 00:50:15.725882 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:15.921331 kubelet[1571]: E0910 00:50:15.921257 1571 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:50:16.443585 kubelet[1571]: E0910 00:50:16.443504 1571 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:50:16.735689 kubelet[1571]: I0910 00:50:16.735569 1571 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:50:16.745012 kubelet[1571]: I0910 00:50:16.744950 1571 
kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:50:16.745012 kubelet[1571]: E0910 00:50:16.745004 1571 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:50:16.753860 kubelet[1571]: E0910 00:50:16.753826 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:16.854691 kubelet[1571]: E0910 00:50:16.854602 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:16.955336 kubelet[1571]: E0910 00:50:16.955263 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.055967 kubelet[1571]: E0910 00:50:17.055927 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.156912 kubelet[1571]: E0910 00:50:17.156852 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.257885 kubelet[1571]: E0910 00:50:17.257832 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.358563 kubelet[1571]: E0910 00:50:17.358380 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.459393 kubelet[1571]: E0910 00:50:17.459338 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.560038 kubelet[1571]: E0910 00:50:17.559979 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.661016 kubelet[1571]: E0910 00:50:17.660893 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" 
not found" Sep 10 00:50:17.761271 kubelet[1571]: E0910 00:50:17.761221 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:17.861950 kubelet[1571]: E0910 00:50:17.861880 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:50:18.284100 systemd[1]: Reloading. Sep 10 00:50:18.351717 /usr/lib/systemd/system-generators/torcx-generator[1883]: time="2025-09-10T00:50:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:50:18.351749 /usr/lib/systemd/system-generators/torcx-generator[1883]: time="2025-09-10T00:50:18Z" level=info msg="torcx already run" Sep 10 00:50:18.414254 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:50:18.414268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:50:18.433033 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:50:18.529185 systemd[1]: Stopping kubelet.service... Sep 10 00:50:18.549874 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:50:18.550096 systemd[1]: Stopped kubelet.service. Sep 10 00:50:18.551833 systemd[1]: Starting kubelet.service... Sep 10 00:50:18.645761 systemd[1]: Started kubelet.service. 
Sep 10 00:50:18.680430 kubelet[1929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:50:18.680430 kubelet[1929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:50:18.680430 kubelet[1929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:50:18.680841 kubelet[1929]: I0910 00:50:18.680496 1929 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:50:18.686104 kubelet[1929]: I0910 00:50:18.686014 1929 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:50:18.686104 kubelet[1929]: I0910 00:50:18.686048 1929 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:50:18.686366 kubelet[1929]: I0910 00:50:18.686340 1929 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:50:18.687883 kubelet[1929]: I0910 00:50:18.687854 1929 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 10 00:50:18.690079 kubelet[1929]: I0910 00:50:18.690021 1929 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:50:18.694204 kubelet[1929]: E0910 00:50:18.694164 1929 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:50:18.694204 kubelet[1929]: I0910 00:50:18.694201 1929 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:50:18.697789 kubelet[1929]: I0910 00:50:18.697760 1929 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:50:18.697926 kubelet[1929]: I0910 00:50:18.697897 1929 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:50:18.698040 kubelet[1929]: I0910 00:50:18.697992 1929 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:50:18.698187 kubelet[1929]: I0910 00:50:18.698029 1929 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:50:18.698273 kubelet[1929]: I0910 00:50:18.698188 1929 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:50:18.698273 kubelet[1929]: I0910 00:50:18.698197 1929 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:50:18.698273 kubelet[1929]: I0910 00:50:18.698220 1929 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:50:18.698360 kubelet[1929]: I0910 00:50:18.698312 1929 kubelet.go:408] "Attempting 
to sync node with API server" Sep 10 00:50:18.698360 kubelet[1929]: I0910 00:50:18.698324 1929 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:50:18.698360 kubelet[1929]: I0910 00:50:18.698350 1929 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:50:18.698360 kubelet[1929]: I0910 00:50:18.698359 1929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:50:18.698978 kubelet[1929]: I0910 00:50:18.698948 1929 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:50:18.699420 kubelet[1929]: I0910 00:50:18.699394 1929 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:50:18.699922 kubelet[1929]: I0910 00:50:18.699886 1929 server.go:1274] "Started kubelet" Sep 10 00:50:18.701784 kubelet[1929]: I0910 00:50:18.701632 1929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:50:18.710083 kubelet[1929]: E0910 00:50:18.710043 1929 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:50:18.713215 kubelet[1929]: I0910 00:50:18.713180 1929 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:50:18.714369 kubelet[1929]: I0910 00:50:18.714340 1929 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:50:18.714369 kubelet[1929]: I0910 00:50:18.714363 1929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:50:18.715743 kubelet[1929]: I0910 00:50:18.715653 1929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:50:18.716635 kubelet[1929]: I0910 00:50:18.716607 1929 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:50:18.716758 kubelet[1929]: I0910 00:50:18.716739 1929 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:50:18.718517 kubelet[1929]: I0910 00:50:18.718483 1929 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:50:18.718877 kubelet[1929]: I0910 00:50:18.718836 1929 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:50:18.719670 kubelet[1929]: I0910 00:50:18.719646 1929 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:50:18.719815 kubelet[1929]: I0910 00:50:18.719782 1929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:50:18.722088 kubelet[1929]: I0910 00:50:18.722062 1929 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:50:18.733925 kubelet[1929]: I0910 00:50:18.733886 1929 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 10 00:50:18.735407 kubelet[1929]: I0910 00:50:18.735391 1929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:50:18.735518 kubelet[1929]: I0910 00:50:18.735503 1929 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:50:18.735635 kubelet[1929]: I0910 00:50:18.735614 1929 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:50:18.735704 kubelet[1929]: E0910 00:50:18.735666 1929 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.751804 1929 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.751821 1929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.751840 1929 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.752022 1929 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.752032 1929 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:50:18.752218 kubelet[1929]: I0910 00:50:18.752052 1929 policy_none.go:49] "None policy: Start" Sep 10 00:50:18.752936 kubelet[1929]: I0910 00:50:18.752517 1929 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:50:18.752936 kubelet[1929]: I0910 00:50:18.752579 1929 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:50:18.752936 kubelet[1929]: I0910 00:50:18.752702 1929 state_mem.go:75] "Updated machine memory state" Sep 10 00:50:18.757247 kubelet[1929]: I0910 00:50:18.757220 1929 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:50:18.757475 kubelet[1929]: I0910 00:50:18.757427 1929 eviction_manager.go:189] 
"Eviction manager: starting control loop" Sep 10 00:50:18.757475 kubelet[1929]: I0910 00:50:18.757453 1929 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:50:18.758176 kubelet[1929]: I0910 00:50:18.758156 1929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:50:18.867883 kubelet[1929]: I0910 00:50:18.867820 1929 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:50:18.949675 kubelet[1929]: I0910 00:50:18.949631 1929 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:50:18.949891 kubelet[1929]: I0910 00:50:18.949740 1929 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:50:19.018245 kubelet[1929]: I0910 00:50:19.018170 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:50:19.018245 kubelet[1929]: I0910 00:50:19.018222 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:50:19.018493 kubelet[1929]: I0910 00:50:19.018266 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:50:19.018493 kubelet[1929]: I0910 00:50:19.018286 1929 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:50:19.018493 kubelet[1929]: I0910 00:50:19.018308 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:50:19.018493 kubelet[1929]: I0910 00:50:19.018324 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef276636f76ecb0b9a23a6322b5494c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aef276636f76ecb0b9a23a6322b5494c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:50:19.018493 kubelet[1929]: I0910 00:50:19.018341 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:50:19.018667 kubelet[1929]: I0910 00:50:19.018358 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:50:19.018667 kubelet[1929]: I0910 00:50:19.018376 
1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:50:19.200866 kubelet[1929]: E0910 00:50:19.200718 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:19.205647 kubelet[1929]: E0910 00:50:19.205583 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:19.205817 kubelet[1929]: E0910 00:50:19.205656 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:19.215615 sudo[1964]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:50:19.215899 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 10 00:50:19.686473 sudo[1964]: pam_unix(sudo:session): session closed for user root Sep 10 00:50:19.698931 kubelet[1929]: I0910 00:50:19.698879 1929 apiserver.go:52] "Watching apiserver" Sep 10 00:50:19.717679 kubelet[1929]: I0910 00:50:19.717612 1929 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:50:19.745448 kubelet[1929]: E0910 00:50:19.745409 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:19.745669 kubelet[1929]: E0910 00:50:19.745632 1929 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:19.745911 kubelet[1929]: E0910 00:50:19.745882 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:20.592545 kubelet[1929]: I0910 00:50:20.592439 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.592407438 podStartE2EDuration="2.592407438s" podCreationTimestamp="2025-09-10 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:20.59184791 +0000 UTC m=+1.942437325" watchObservedRunningTime="2025-09-10 00:50:20.592407438 +0000 UTC m=+1.942996853" Sep 10 00:50:20.592790 kubelet[1929]: I0910 00:50:20.592667 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.592658292 podStartE2EDuration="2.592658292s" podCreationTimestamp="2025-09-10 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:20.563639202 +0000 UTC m=+1.914228607" watchObservedRunningTime="2025-09-10 00:50:20.592658292 +0000 UTC m=+1.943247707" Sep 10 00:50:20.612328 kubelet[1929]: I0910 00:50:20.612236 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.6122145469999998 podStartE2EDuration="2.612214547s" podCreationTimestamp="2025-09-10 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:20.601800408 +0000 UTC m=+1.952389824" 
watchObservedRunningTime="2025-09-10 00:50:20.612214547 +0000 UTC m=+1.962803962" Sep 10 00:50:20.747547 kubelet[1929]: E0910 00:50:20.747482 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:21.749441 kubelet[1929]: E0910 00:50:21.749404 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:21.926546 sudo[1299]: pam_unix(sudo:session): session closed for user root Sep 10 00:50:21.928169 sshd[1296]: pam_unix(sshd:session): session closed for user core Sep 10 00:50:21.930553 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:39250.service: Deactivated successfully. Sep 10 00:50:21.931281 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:50:21.931418 systemd[1]: session-5.scope: Consumed 4.038s CPU time. Sep 10 00:50:21.932112 systemd-logind[1190]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:50:21.932880 systemd-logind[1190]: Removed session 5. Sep 10 00:50:24.996496 kubelet[1929]: I0910 00:50:24.996446 1929 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:50:24.997115 env[1199]: time="2025-09-10T00:50:24.996947024Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:50:24.997409 kubelet[1929]: I0910 00:50:24.997176 1929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:50:25.974365 systemd[1]: Created slice kubepods-besteffort-pod6407eb50_e10c_4e71_be22_c5e5bd3a5a91.slice. Sep 10 00:50:25.987819 systemd[1]: Created slice kubepods-burstable-pod72015ac2_1c73_4b4f_83e7_9cb61d325200.slice. 
Sep 10 00:50:26.009780 systemd[1]: Created slice kubepods-besteffort-pod08089bcd_5d61_4c4d_bb10_c99d7259adea.slice. Sep 10 00:50:26.059818 kubelet[1929]: I0910 00:50:26.059755 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkq7q\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-kube-api-access-gkq7q\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.059818 kubelet[1929]: I0910 00:50:26.059805 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-lib-modules\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059830 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpk86\" (UniqueName: \"kubernetes.io/projected/6407eb50-e10c-4e71-be22-c5e5bd3a5a91-kube-api-access-bpk86\") pod \"kube-proxy-p6rlx\" (UID: \"6407eb50-e10c-4e71-be22-c5e5bd3a5a91\") " pod="kube-system/kube-proxy-p6rlx" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059854 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6407eb50-e10c-4e71-be22-c5e5bd3a5a91-xtables-lock\") pod \"kube-proxy-p6rlx\" (UID: \"6407eb50-e10c-4e71-be22-c5e5bd3a5a91\") " pod="kube-system/kube-proxy-p6rlx" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059874 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6407eb50-e10c-4e71-be22-c5e5bd3a5a91-lib-modules\") pod \"kube-proxy-p6rlx\" (UID: \"6407eb50-e10c-4e71-be22-c5e5bd3a5a91\") " 
pod="kube-system/kube-proxy-p6rlx" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059894 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-bpf-maps\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059918 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ac2-1c73-4b4f-83e7-9cb61d325200-clustermesh-secrets\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060272 kubelet[1929]: I0910 00:50:26.059951 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-hostproc\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060426 kubelet[1929]: I0910 00:50:26.059977 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-xtables-lock\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060426 kubelet[1929]: I0910 00:50:26.060014 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-config-path\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060426 kubelet[1929]: I0910 00:50:26.060040 1929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-net\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060426 kubelet[1929]: I0910 00:50:26.060060 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkv6r\" (UniqueName: \"kubernetes.io/projected/08089bcd-5d61-4c4d-bb10-c99d7259adea-kube-api-access-vkv6r\") pod \"cilium-operator-5d85765b45-65g9m\" (UID: \"08089bcd-5d61-4c4d-bb10-c99d7259adea\") " pod="kube-system/cilium-operator-5d85765b45-65g9m" Sep 10 00:50:26.060426 kubelet[1929]: I0910 00:50:26.060122 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-etc-cni-netd\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060566 kubelet[1929]: I0910 00:50:26.060157 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08089bcd-5d61-4c4d-bb10-c99d7259adea-cilium-config-path\") pod \"cilium-operator-5d85765b45-65g9m\" (UID: \"08089bcd-5d61-4c4d-bb10-c99d7259adea\") " pod="kube-system/cilium-operator-5d85765b45-65g9m" Sep 10 00:50:26.060566 kubelet[1929]: I0910 00:50:26.060181 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-run\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060566 kubelet[1929]: I0910 00:50:26.060200 1929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-cgroup\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060566 kubelet[1929]: I0910 00:50:26.060222 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cni-path\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060566 kubelet[1929]: I0910 00:50:26.060243 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-kernel\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060690 kubelet[1929]: I0910 00:50:26.060273 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-hubble-tls\") pod \"cilium-qhm8q\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " pod="kube-system/cilium-qhm8q" Sep 10 00:50:26.060690 kubelet[1929]: I0910 00:50:26.060298 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6407eb50-e10c-4e71-be22-c5e5bd3a5a91-kube-proxy\") pod \"kube-proxy-p6rlx\" (UID: \"6407eb50-e10c-4e71-be22-c5e5bd3a5a91\") " pod="kube-system/kube-proxy-p6rlx" Sep 10 00:50:26.161258 kubelet[1929]: I0910 00:50:26.161215 1929 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 10 00:50:26.285185 kubelet[1929]: E0910 00:50:26.284346 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.285392 env[1199]: time="2025-09-10T00:50:26.285243627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6rlx,Uid:6407eb50-e10c-4e71-be22-c5e5bd3a5a91,Namespace:kube-system,Attempt:0,}" Sep 10 00:50:26.290737 kubelet[1929]: E0910 00:50:26.290703 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.291463 env[1199]: time="2025-09-10T00:50:26.291261467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhm8q,Uid:72015ac2-1c73-4b4f-83e7-9cb61d325200,Namespace:kube-system,Attempt:0,}" Sep 10 00:50:26.313689 kubelet[1929]: E0910 00:50:26.313641 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.314289 env[1199]: time="2025-09-10T00:50:26.314229916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-65g9m,Uid:08089bcd-5d61-4c4d-bb10-c99d7259adea,Namespace:kube-system,Attempt:0,}" Sep 10 00:50:26.529209 kubelet[1929]: E0910 00:50:26.529170 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.757419 kubelet[1929]: E0910 00:50:26.757377 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
10 00:50:26.764112 env[1199]: time="2025-09-10T00:50:26.763873712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:50:26.764112 env[1199]: time="2025-09-10T00:50:26.763918347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:50:26.764112 env[1199]: time="2025-09-10T00:50:26.763933076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:50:26.764112 env[1199]: time="2025-09-10T00:50:26.764074798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40 pid=2024 runtime=io.containerd.runc.v2 Sep 10 00:50:26.766987 env[1199]: time="2025-09-10T00:50:26.766893238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:50:26.766987 env[1199]: time="2025-09-10T00:50:26.766966308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:50:26.767071 env[1199]: time="2025-09-10T00:50:26.766983630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:50:26.767443 env[1199]: time="2025-09-10T00:50:26.767182071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:50:26.767443 env[1199]: time="2025-09-10T00:50:26.767408615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:50:26.767443 env[1199]: time="2025-09-10T00:50:26.767422051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:50:26.767748 env[1199]: time="2025-09-10T00:50:26.767714120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9966f5e420f8f88fa4a31343f1867caced97fb4c26b4d6240d9ef1c762229d6 pid=2044 runtime=io.containerd.runc.v2 Sep 10 00:50:26.768737 env[1199]: time="2025-09-10T00:50:26.767188884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485 pid=2048 runtime=io.containerd.runc.v2 Sep 10 00:50:26.778702 systemd[1]: Started cri-containerd-5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40.scope. Sep 10 00:50:26.788706 systemd[1]: Started cri-containerd-aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485.scope. Sep 10 00:50:26.793458 systemd[1]: Started cri-containerd-c9966f5e420f8f88fa4a31343f1867caced97fb4c26b4d6240d9ef1c762229d6.scope. 
Sep 10 00:50:26.822121 env[1199]: time="2025-09-10T00:50:26.822038709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhm8q,Uid:72015ac2-1c73-4b4f-83e7-9cb61d325200,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\"" Sep 10 00:50:26.822864 kubelet[1929]: E0910 00:50:26.822841 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.824276 env[1199]: time="2025-09-10T00:50:26.824252041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6rlx,Uid:6407eb50-e10c-4e71-be22-c5e5bd3a5a91,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9966f5e420f8f88fa4a31343f1867caced97fb4c26b4d6240d9ef1c762229d6\"" Sep 10 00:50:26.831092 kubelet[1929]: E0910 00:50:26.831065 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.832834 env[1199]: time="2025-09-10T00:50:26.832800479Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:50:26.835019 env[1199]: time="2025-09-10T00:50:26.834850628Z" level=info msg="CreateContainer within sandbox \"c9966f5e420f8f88fa4a31343f1867caced97fb4c26b4d6240d9ef1c762229d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:50:26.838442 env[1199]: time="2025-09-10T00:50:26.838411671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-65g9m,Uid:08089bcd-5d61-4c4d-bb10-c99d7259adea,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\"" Sep 10 00:50:26.839021 kubelet[1929]: E0910 00:50:26.838990 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:26.905473 env[1199]: time="2025-09-10T00:50:26.905396418Z" level=info msg="CreateContainer within sandbox \"c9966f5e420f8f88fa4a31343f1867caced97fb4c26b4d6240d9ef1c762229d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c7bbd82c0d50507dfdc42063e0dcf29d42ccb2de353825791308218c7e8603b\"" Sep 10 00:50:26.906445 env[1199]: time="2025-09-10T00:50:26.906387988Z" level=info msg="StartContainer for \"8c7bbd82c0d50507dfdc42063e0dcf29d42ccb2de353825791308218c7e8603b\"" Sep 10 00:50:26.925002 systemd[1]: Started cri-containerd-8c7bbd82c0d50507dfdc42063e0dcf29d42ccb2de353825791308218c7e8603b.scope. Sep 10 00:50:26.954759 env[1199]: time="2025-09-10T00:50:26.954691049Z" level=info msg="StartContainer for \"8c7bbd82c0d50507dfdc42063e0dcf29d42ccb2de353825791308218c7e8603b\" returns successfully" Sep 10 00:50:27.761281 kubelet[1929]: E0910 00:50:27.761248 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:28.500359 kubelet[1929]: E0910 00:50:28.500267 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:28.671981 kubelet[1929]: I0910 00:50:28.671803 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p6rlx" podStartSLOduration=3.6717761319999997 podStartE2EDuration="3.671776132s" podCreationTimestamp="2025-09-10 00:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:27.770208592 +0000 UTC m=+9.120798007" watchObservedRunningTime="2025-09-10 00:50:28.671776132 +0000 UTC m=+10.022365547" Sep 10 
00:50:28.764275 kubelet[1929]: E0910 00:50:28.764099 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:29.778658 kubelet[1929]: E0910 00:50:29.778611 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:30.769034 kubelet[1929]: E0910 00:50:30.768993 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:35.179905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886396860.mount: Deactivated successfully. Sep 10 00:50:39.560633 env[1199]: time="2025-09-10T00:50:39.560549271Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:39.563035 env[1199]: time="2025-09-10T00:50:39.562979105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:39.565307 env[1199]: time="2025-09-10T00:50:39.565252671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:39.565928 env[1199]: time="2025-09-10T00:50:39.565880215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:50:39.567248 env[1199]: time="2025-09-10T00:50:39.567207441Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:50:39.568094 env[1199]: time="2025-09-10T00:50:39.568030788Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:50:39.580612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668204782.mount: Deactivated successfully. Sep 10 00:50:39.581653 env[1199]: time="2025-09-10T00:50:39.581599458Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\"" Sep 10 00:50:39.582180 env[1199]: time="2025-09-10T00:50:39.582154635Z" level=info msg="StartContainer for \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\"" Sep 10 00:50:39.604322 systemd[1]: Started cri-containerd-39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664.scope. Sep 10 00:50:39.638076 systemd[1]: cri-containerd-39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664.scope: Deactivated successfully. 
Sep 10 00:50:39.701406 env[1199]: time="2025-09-10T00:50:39.701319438Z" level=info msg="StartContainer for \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\" returns successfully" Sep 10 00:50:39.785351 kubelet[1929]: E0910 00:50:39.785294 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:40.094589 env[1199]: time="2025-09-10T00:50:40.094545670Z" level=info msg="shim disconnected" id=39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664 Sep 10 00:50:40.094589 env[1199]: time="2025-09-10T00:50:40.094587129Z" level=warning msg="cleaning up after shim disconnected" id=39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664 namespace=k8s.io Sep 10 00:50:40.094839 env[1199]: time="2025-09-10T00:50:40.094595585Z" level=info msg="cleaning up dead shim" Sep 10 00:50:40.100628 env[1199]: time="2025-09-10T00:50:40.100580270Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:50:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\n" Sep 10 00:50:40.579044 systemd[1]: run-containerd-runc-k8s.io-39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664-runc.eQ5002.mount: Deactivated successfully. Sep 10 00:50:40.579154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664-rootfs.mount: Deactivated successfully. 
Sep 10 00:50:40.787914 kubelet[1929]: E0910 00:50:40.787872 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:40.789561 env[1199]: time="2025-09-10T00:50:40.789504632Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:50:40.960400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980740595.mount: Deactivated successfully. Sep 10 00:50:41.332657 env[1199]: time="2025-09-10T00:50:41.332602603Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\"" Sep 10 00:50:41.333133 env[1199]: time="2025-09-10T00:50:41.333095471Z" level=info msg="StartContainer for \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\"" Sep 10 00:50:41.351330 systemd[1]: Started cri-containerd-0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1.scope. Sep 10 00:50:41.391548 env[1199]: time="2025-09-10T00:50:41.390279570Z" level=info msg="StartContainer for \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\" returns successfully" Sep 10 00:50:41.396910 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:50:41.397172 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:50:41.397346 systemd[1]: Stopping systemd-sysctl.service... Sep 10 00:50:41.398829 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:50:41.401901 systemd[1]: cri-containerd-0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1.scope: Deactivated successfully. Sep 10 00:50:41.408136 systemd[1]: Finished systemd-sysctl.service. 
Sep 10 00:50:41.430670 env[1199]: time="2025-09-10T00:50:41.430621379Z" level=info msg="shim disconnected" id=0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1 Sep 10 00:50:41.430670 env[1199]: time="2025-09-10T00:50:41.430666344Z" level=warning msg="cleaning up after shim disconnected" id=0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1 namespace=k8s.io Sep 10 00:50:41.430670 env[1199]: time="2025-09-10T00:50:41.430674259Z" level=info msg="cleaning up dead shim" Sep 10 00:50:41.437956 env[1199]: time="2025-09-10T00:50:41.437896925Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:50:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Sep 10 00:50:41.578645 systemd[1]: run-containerd-runc-k8s.io-0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1-runc.ua1CPy.mount: Deactivated successfully. Sep 10 00:50:41.578758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1-rootfs.mount: Deactivated successfully. Sep 10 00:50:41.764205 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:37572.service. 
Sep 10 00:50:41.791450 kubelet[1929]: E0910 00:50:41.791385 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:41.799687 env[1199]: time="2025-09-10T00:50:41.799627429Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:50:41.801761 sshd[2435]: Accepted publickey for core from 10.0.0.1 port 37572 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:50:41.803269 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:50:41.808460 systemd-logind[1190]: New session 6 of user core. Sep 10 00:50:41.809259 systemd[1]: Started session-6.scope. Sep 10 00:50:41.823107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158595517.mount: Deactivated successfully. Sep 10 00:50:41.825542 env[1199]: time="2025-09-10T00:50:41.825479582Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\"" Sep 10 00:50:41.826104 env[1199]: time="2025-09-10T00:50:41.826072480Z" level=info msg="StartContainer for \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\"" Sep 10 00:50:41.851038 systemd[1]: Started cri-containerd-d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8.scope. Sep 10 00:50:41.897951 systemd[1]: cri-containerd-d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8.scope: Deactivated successfully. 
Sep 10 00:50:41.899242 env[1199]: time="2025-09-10T00:50:41.899205099Z" level=info msg="StartContainer for \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\" returns successfully" Sep 10 00:50:41.950236 sshd[2435]: pam_unix(sshd:session): session closed for user core Sep 10 00:50:41.952428 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:37572.service: Deactivated successfully. Sep 10 00:50:41.953135 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:50:41.953668 systemd-logind[1190]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:50:41.954341 systemd-logind[1190]: Removed session 6. Sep 10 00:50:41.968398 env[1199]: time="2025-09-10T00:50:41.968350375Z" level=info msg="shim disconnected" id=d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8 Sep 10 00:50:41.968610 env[1199]: time="2025-09-10T00:50:41.968412172Z" level=warning msg="cleaning up after shim disconnected" id=d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8 namespace=k8s.io Sep 10 00:50:41.968610 env[1199]: time="2025-09-10T00:50:41.968423374Z" level=info msg="cleaning up dead shim" Sep 10 00:50:41.975392 env[1199]: time="2025-09-10T00:50:41.975343594Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:50:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n" Sep 10 00:50:42.578887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8-rootfs.mount: Deactivated successfully. 
Sep 10 00:50:42.794176 kubelet[1929]: E0910 00:50:42.794142 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:42.796116 env[1199]: time="2025-09-10T00:50:42.796063436Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:50:42.989896 env[1199]: time="2025-09-10T00:50:42.989758927Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\"" Sep 10 00:50:42.990645 env[1199]: time="2025-09-10T00:50:42.990447867Z" level=info msg="StartContainer for \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\"" Sep 10 00:50:43.003028 env[1199]: time="2025-09-10T00:50:43.002965554Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:43.005103 env[1199]: time="2025-09-10T00:50:43.005036390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:43.007393 env[1199]: time="2025-09-10T00:50:43.007347353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:50:43.007622 env[1199]: time="2025-09-10T00:50:43.007578441Z" level=info 
msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 00:50:43.012413 env[1199]: time="2025-09-10T00:50:43.010672472Z" level=info msg="CreateContainer within sandbox \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:50:43.017805 systemd[1]: Started cri-containerd-52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2.scope. Sep 10 00:50:43.033022 env[1199]: time="2025-09-10T00:50:43.032933783Z" level=info msg="CreateContainer within sandbox \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\"" Sep 10 00:50:43.036027 env[1199]: time="2025-09-10T00:50:43.035976067Z" level=info msg="StartContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\"" Sep 10 00:50:43.047468 systemd[1]: cri-containerd-52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2.scope: Deactivated successfully. Sep 10 00:50:43.048980 env[1199]: time="2025-09-10T00:50:43.048936409Z" level=info msg="StartContainer for \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\" returns successfully" Sep 10 00:50:43.053420 systemd[1]: Started cri-containerd-dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253.scope. 
Sep 10 00:50:43.183696 env[1199]: time="2025-09-10T00:50:43.183633754Z" level=info msg="StartContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" returns successfully" Sep 10 00:50:43.184923 env[1199]: time="2025-09-10T00:50:43.184883590Z" level=info msg="shim disconnected" id=52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2 Sep 10 00:50:43.184989 env[1199]: time="2025-09-10T00:50:43.184924578Z" level=warning msg="cleaning up after shim disconnected" id=52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2 namespace=k8s.io Sep 10 00:50:43.184989 env[1199]: time="2025-09-10T00:50:43.184936160Z" level=info msg="cleaning up dead shim" Sep 10 00:50:43.201429 env[1199]: time="2025-09-10T00:50:43.201359185Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2587 runtime=io.containerd.runc.v2\n" Sep 10 00:50:43.579994 systemd[1]: run-containerd-runc-k8s.io-52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2-runc.1Soak1.mount: Deactivated successfully. Sep 10 00:50:43.580113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2-rootfs.mount: Deactivated successfully. 
Sep 10 00:50:43.797235 kubelet[1929]: E0910 00:50:43.797185 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:43.800259 kubelet[1929]: E0910 00:50:43.800219 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:50:43.808556 env[1199]: time="2025-09-10T00:50:43.801897098Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:50:43.829026 env[1199]: time="2025-09-10T00:50:43.828966669Z" level=info msg="CreateContainer within sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\"" Sep 10 00:50:43.829677 env[1199]: time="2025-09-10T00:50:43.829643716Z" level=info msg="StartContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\"" Sep 10 00:50:43.839381 kubelet[1929]: I0910 00:50:43.839171 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-65g9m" podStartSLOduration=2.66975007 podStartE2EDuration="18.839148411s" podCreationTimestamp="2025-09-10 00:50:25 +0000 UTC" firstStartedPulling="2025-09-10 00:50:26.839764693 +0000 UTC m=+8.190354118" lastFinishedPulling="2025-09-10 00:50:43.009163044 +0000 UTC m=+24.359752459" observedRunningTime="2025-09-10 00:50:43.838633012 +0000 UTC m=+25.189222427" watchObservedRunningTime="2025-09-10 00:50:43.839148411 +0000 UTC m=+25.189737816" Sep 10 00:50:43.865930 systemd[1]: Started cri-containerd-728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204.scope. 
Sep 10 00:50:43.905504 env[1199]: time="2025-09-10T00:50:43.905434770Z" level=info msg="StartContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" returns successfully"
Sep 10 00:50:44.014520 kubelet[1929]: I0910 00:50:44.014459 1929 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 10 00:50:44.045659 systemd[1]: Created slice kubepods-burstable-pod72ae3685_de13_4277_a0db_086a4108f81e.slice.
Sep 10 00:50:44.051189 systemd[1]: Created slice kubepods-burstable-podf8d76a41_9118_4ff5_b06d_159190dae93f.slice.
Sep 10 00:50:44.191194 kubelet[1929]: I0910 00:50:44.191080 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8d76a41-9118-4ff5-b06d-159190dae93f-config-volume\") pod \"coredns-7c65d6cfc9-65vgk\" (UID: \"f8d76a41-9118-4ff5-b06d-159190dae93f\") " pod="kube-system/coredns-7c65d6cfc9-65vgk"
Sep 10 00:50:44.191194 kubelet[1929]: I0910 00:50:44.191117 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ae3685-de13-4277-a0db-086a4108f81e-config-volume\") pod \"coredns-7c65d6cfc9-k7k8s\" (UID: \"72ae3685-de13-4277-a0db-086a4108f81e\") " pod="kube-system/coredns-7c65d6cfc9-k7k8s"
Sep 10 00:50:44.191194 kubelet[1929]: I0910 00:50:44.191143 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24rl\" (UniqueName: \"kubernetes.io/projected/72ae3685-de13-4277-a0db-086a4108f81e-kube-api-access-j24rl\") pod \"coredns-7c65d6cfc9-k7k8s\" (UID: \"72ae3685-de13-4277-a0db-086a4108f81e\") " pod="kube-system/coredns-7c65d6cfc9-k7k8s"
Sep 10 00:50:44.191194 kubelet[1929]: I0910 00:50:44.191160 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97zd\" (UniqueName: \"kubernetes.io/projected/f8d76a41-9118-4ff5-b06d-159190dae93f-kube-api-access-l97zd\") pod \"coredns-7c65d6cfc9-65vgk\" (UID: \"f8d76a41-9118-4ff5-b06d-159190dae93f\") " pod="kube-system/coredns-7c65d6cfc9-65vgk"
Sep 10 00:50:44.348821 kubelet[1929]: E0910 00:50:44.348766 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:44.349507 env[1199]: time="2025-09-10T00:50:44.349455047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7k8s,Uid:72ae3685-de13-4277-a0db-086a4108f81e,Namespace:kube-system,Attempt:0,}"
Sep 10 00:50:44.353294 kubelet[1929]: E0910 00:50:44.353264 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:44.353746 env[1199]: time="2025-09-10T00:50:44.353693952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-65vgk,Uid:f8d76a41-9118-4ff5-b06d-159190dae93f,Namespace:kube-system,Attempt:0,}"
Sep 10 00:50:44.805222 kubelet[1929]: E0910 00:50:44.805180 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:44.805583 kubelet[1929]: E0910 00:50:44.805305 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:44.857119 kubelet[1929]: I0910 00:50:44.857058 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qhm8q" podStartSLOduration=7.1151554 podStartE2EDuration="19.857026638s" podCreationTimestamp="2025-09-10 00:50:25 +0000 UTC" firstStartedPulling="2025-09-10 00:50:26.825026073 +0000 UTC m=+8.175615488" lastFinishedPulling="2025-09-10 00:50:39.56689731 +0000 UTC m=+20.917486726" observedRunningTime="2025-09-10 00:50:44.856878365 +0000 UTC m=+26.207467781" watchObservedRunningTime="2025-09-10 00:50:44.857026638 +0000 UTC m=+26.207616053"
Sep 10 00:50:45.806426 kubelet[1929]: E0910 00:50:45.806380 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:46.808115 kubelet[1929]: E0910 00:50:46.808060 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:46.881486 systemd-networkd[1022]: cilium_host: Link UP
Sep 10 00:50:46.885912 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 10 00:50:46.885984 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 10 00:50:46.882200 systemd-networkd[1022]: cilium_net: Link UP
Sep 10 00:50:46.883964 systemd-networkd[1022]: cilium_net: Gained carrier
Sep 10 00:50:46.886435 systemd-networkd[1022]: cilium_host: Gained carrier
Sep 10 00:50:46.955289 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:37584.service.
Sep 10 00:50:46.960669 systemd-networkd[1022]: cilium_vxlan: Link UP
Sep 10 00:50:46.960675 systemd-networkd[1022]: cilium_vxlan: Gained carrier
Sep 10 00:50:46.983808 systemd-networkd[1022]: cilium_host: Gained IPv6LL
Sep 10 00:50:46.991256 sshd[2841]: Accepted publickey for core from 10.0.0.1 port 37584 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:50:46.992559 sshd[2841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:46.996426 systemd-logind[1190]: New session 7 of user core.
Sep 10 00:50:46.997156 systemd[1]: Started session-7.scope.
Sep 10 00:50:47.115339 sshd[2841]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:47.118161 systemd-logind[1190]: Session 7 logged out. Waiting for processes to exit.
Sep 10 00:50:47.118376 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:37584.service: Deactivated successfully.
Sep 10 00:50:47.119047 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 00:50:47.119773 systemd-logind[1190]: Removed session 7.
Sep 10 00:50:47.176567 kernel: NET: Registered PF_ALG protocol family
Sep 10 00:50:47.584626 systemd-networkd[1022]: cilium_net: Gained IPv6LL
Sep 10 00:50:47.733606 systemd-networkd[1022]: lxc_health: Link UP
Sep 10 00:50:47.743069 systemd-networkd[1022]: lxc_health: Gained carrier
Sep 10 00:50:47.743560 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 10 00:50:47.951091 systemd-networkd[1022]: lxc8b353af71e5d: Link UP
Sep 10 00:50:47.961876 systemd-networkd[1022]: lxc14b53b520c87: Link UP
Sep 10 00:50:47.971590 kernel: eth0: renamed from tmp884d2
Sep 10 00:50:47.979572 kernel: eth0: renamed from tmp36a48
Sep 10 00:50:47.990209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 10 00:50:47.990283 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8b353af71e5d: link becomes ready
Sep 10 00:50:47.990456 systemd-networkd[1022]: lxc8b353af71e5d: Gained carrier
Sep 10 00:50:47.992565 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc14b53b520c87: link becomes ready
Sep 10 00:50:47.992865 systemd-networkd[1022]: lxc14b53b520c87: Gained carrier
Sep 10 00:50:48.298355 kubelet[1929]: E0910 00:50:48.297920 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:48.810660 kubelet[1929]: E0910 00:50:48.810629 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:48.927681 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL
Sep 10 00:50:49.183681 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Sep 10 00:50:49.503675 systemd-networkd[1022]: lxc8b353af71e5d: Gained IPv6LL
Sep 10 00:50:49.503938 systemd-networkd[1022]: lxc14b53b520c87: Gained IPv6LL
Sep 10 00:50:51.471904 env[1199]: time="2025-09-10T00:50:51.471811916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:50:51.471904 env[1199]: time="2025-09-10T00:50:51.471853775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:50:51.471904 env[1199]: time="2025-09-10T00:50:51.471864836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:50:51.472318 env[1199]: time="2025-09-10T00:50:51.472030200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36a486ffe602fdf38e6269489428a24fa8efd847cf951a89d07316b26c11b478 pid=3176 runtime=io.containerd.runc.v2
Sep 10 00:50:51.474559 env[1199]: time="2025-09-10T00:50:51.474143180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:50:51.474559 env[1199]: time="2025-09-10T00:50:51.474200548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:50:51.474559 env[1199]: time="2025-09-10T00:50:51.474217772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:50:51.474559 env[1199]: time="2025-09-10T00:50:51.474386241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/884d20c9466a4d4490277806a11af871887f501eee591bcb0a1cf3578aaff357 pid=3191 runtime=io.containerd.runc.v2
Sep 10 00:50:51.490784 systemd[1]: Started cri-containerd-36a486ffe602fdf38e6269489428a24fa8efd847cf951a89d07316b26c11b478.scope.
Sep 10 00:50:51.494414 systemd[1]: Started cri-containerd-884d20c9466a4d4490277806a11af871887f501eee591bcb0a1cf3578aaff357.scope.
Sep 10 00:50:51.503357 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:50:51.510025 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:50:51.527990 env[1199]: time="2025-09-10T00:50:51.527932950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7k8s,Uid:72ae3685-de13-4277-a0db-086a4108f81e,Namespace:kube-system,Attempt:0,} returns sandbox id \"36a486ffe602fdf38e6269489428a24fa8efd847cf951a89d07316b26c11b478\""
Sep 10 00:50:51.528734 kubelet[1929]: E0910 00:50:51.528696 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:51.530763 env[1199]: time="2025-09-10T00:50:51.530725408Z" level=info msg="CreateContainer within sandbox \"36a486ffe602fdf38e6269489428a24fa8efd847cf951a89d07316b26c11b478\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:50:51.537766 env[1199]: time="2025-09-10T00:50:51.536565079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-65vgk,Uid:f8d76a41-9118-4ff5-b06d-159190dae93f,Namespace:kube-system,Attempt:0,} returns sandbox id \"884d20c9466a4d4490277806a11af871887f501eee591bcb0a1cf3578aaff357\""
Sep 10 00:50:51.537925 kubelet[1929]: E0910 00:50:51.537535 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:51.539231 env[1199]: time="2025-09-10T00:50:51.539197975Z" level=info msg="CreateContainer within sandbox \"884d20c9466a4d4490277806a11af871887f501eee591bcb0a1cf3578aaff357\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:50:51.555940 env[1199]: time="2025-09-10T00:50:51.555880962Z" level=info msg="CreateContainer within sandbox \"36a486ffe602fdf38e6269489428a24fa8efd847cf951a89d07316b26c11b478\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9808b068d747c6790ceb329c212577fba8093fed0c0b3177935d5821ce6bd89\""
Sep 10 00:50:51.556697 env[1199]: time="2025-09-10T00:50:51.556669709Z" level=info msg="StartContainer for \"c9808b068d747c6790ceb329c212577fba8093fed0c0b3177935d5821ce6bd89\""
Sep 10 00:50:51.565863 env[1199]: time="2025-09-10T00:50:51.565792540Z" level=info msg="CreateContainer within sandbox \"884d20c9466a4d4490277806a11af871887f501eee591bcb0a1cf3578aaff357\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e19d571d7f838530a9daedfc3861f4c092c807408adf3a05a8613283f8412272\""
Sep 10 00:50:51.567547 env[1199]: time="2025-09-10T00:50:51.567489680Z" level=info msg="StartContainer for \"e19d571d7f838530a9daedfc3861f4c092c807408adf3a05a8613283f8412272\""
Sep 10 00:50:51.576324 systemd[1]: Started cri-containerd-c9808b068d747c6790ceb329c212577fba8093fed0c0b3177935d5821ce6bd89.scope.
Sep 10 00:50:51.595701 systemd[1]: Started cri-containerd-e19d571d7f838530a9daedfc3861f4c092c807408adf3a05a8613283f8412272.scope.
Sep 10 00:50:51.613174 env[1199]: time="2025-09-10T00:50:51.613092605Z" level=info msg="StartContainer for \"c9808b068d747c6790ceb329c212577fba8093fed0c0b3177935d5821ce6bd89\" returns successfully"
Sep 10 00:50:51.629127 env[1199]: time="2025-09-10T00:50:51.629072507Z" level=info msg="StartContainer for \"e19d571d7f838530a9daedfc3861f4c092c807408adf3a05a8613283f8412272\" returns successfully"
Sep 10 00:50:51.816479 kubelet[1929]: E0910 00:50:51.816442 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:51.819375 kubelet[1929]: E0910 00:50:51.819343 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:51.827855 kubelet[1929]: I0910 00:50:51.827793 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-k7k8s" podStartSLOduration=26.827774115 podStartE2EDuration="26.827774115s" podCreationTimestamp="2025-09-10 00:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:51.827203923 +0000 UTC m=+33.177793338" watchObservedRunningTime="2025-09-10 00:50:51.827774115 +0000 UTC m=+33.178363530"
Sep 10 00:50:51.850605 kubelet[1929]: I0910 00:50:51.849856 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-65vgk" podStartSLOduration=25.849834647 podStartE2EDuration="25.849834647s" podCreationTimestamp="2025-09-10 00:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:50:51.849373491 +0000 UTC m=+33.199962906" watchObservedRunningTime="2025-09-10 00:50:51.849834647 +0000 UTC m=+33.200424062"
Sep 10 00:50:52.120518 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:55628.service.
Sep 10 00:50:52.155302 sshd[3335]: Accepted publickey for core from 10.0.0.1 port 55628 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:50:52.156623 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:52.160380 systemd-logind[1190]: New session 8 of user core.
Sep 10 00:50:52.161403 systemd[1]: Started session-8.scope.
Sep 10 00:50:52.279789 sshd[3335]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:52.282406 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:55628.service: Deactivated successfully.
Sep 10 00:50:52.283364 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 00:50:52.283963 systemd-logind[1190]: Session 8 logged out. Waiting for processes to exit.
Sep 10 00:50:52.284773 systemd-logind[1190]: Removed session 8.
Sep 10 00:50:52.482380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698500445.mount: Deactivated successfully.
Sep 10 00:50:52.820833 kubelet[1929]: E0910 00:50:52.820783 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:52.821248 kubelet[1929]: E0910 00:50:52.820857 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:53.822964 kubelet[1929]: E0910 00:50:53.822876 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:53.822964 kubelet[1929]: E0910 00:50:53.822885 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:57.284134 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:55634.service.
Sep 10 00:50:57.316507 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 55634 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:50:57.317798 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:57.321228 systemd-logind[1190]: New session 9 of user core.
Sep 10 00:50:57.322308 systemd[1]: Started session-9.scope.
Sep 10 00:50:57.431707 sshd[3351]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:57.433982 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:55634.service: Deactivated successfully.
Sep 10 00:50:57.434714 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:50:57.435188 systemd-logind[1190]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:50:57.435830 systemd-logind[1190]: Removed session 9.
Sep 10 00:51:02.436593 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:40308.service.
Sep 10 00:51:02.755564 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:02.756645 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:02.760396 systemd-logind[1190]: New session 10 of user core.
Sep 10 00:51:02.761480 systemd[1]: Started session-10.scope.
Sep 10 00:51:03.262566 sshd[3365]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:03.266496 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:40314.service.
Sep 10 00:51:03.267074 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:40308.service: Deactivated successfully.
Sep 10 00:51:03.267745 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 00:51:03.268757 systemd-logind[1190]: Session 10 logged out. Waiting for processes to exit.
Sep 10 00:51:03.269713 systemd-logind[1190]: Removed session 10.
Sep 10 00:51:03.301710 sshd[3379]: Accepted publickey for core from 10.0.0.1 port 40314 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:03.303072 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:03.307146 systemd-logind[1190]: New session 11 of user core.
Sep 10 00:51:03.307906 systemd[1]: Started session-11.scope.
Sep 10 00:51:04.671639 sshd[3379]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:04.674763 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:40314.service: Deactivated successfully.
Sep 10 00:51:04.675446 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 00:51:04.676120 systemd-logind[1190]: Session 11 logged out. Waiting for processes to exit.
Sep 10 00:51:04.677332 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:40316.service.
Sep 10 00:51:04.678503 systemd-logind[1190]: Removed session 11.
Sep 10 00:51:04.711966 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 40316 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:04.713215 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:04.716582 systemd-logind[1190]: New session 12 of user core.
Sep 10 00:51:04.717619 systemd[1]: Started session-12.scope.
Sep 10 00:51:04.931114 sshd[3392]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:04.933825 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:40316.service: Deactivated successfully.
Sep 10 00:51:04.934577 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:51:04.935089 systemd-logind[1190]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:51:04.935857 systemd-logind[1190]: Removed session 12.
Sep 10 00:51:09.936014 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:44388.service.
Sep 10 00:51:09.970260 sshd[3405]: Accepted publickey for core from 10.0.0.1 port 44388 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:09.971566 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:09.975577 systemd-logind[1190]: New session 13 of user core.
Sep 10 00:51:09.976672 systemd[1]: Started session-13.scope.
Sep 10 00:51:10.101903 sshd[3405]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:10.104111 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:44388.service: Deactivated successfully.
Sep 10 00:51:10.104792 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:51:10.105292 systemd-logind[1190]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:51:10.105994 systemd-logind[1190]: Removed session 13.
Sep 10 00:51:15.106515 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:44404.service.
Sep 10 00:51:15.139081 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:15.140305 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:15.144742 systemd-logind[1190]: New session 14 of user core.
Sep 10 00:51:15.145923 systemd[1]: Started session-14.scope.
Sep 10 00:51:15.302227 sshd[3418]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:15.305844 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:44404.service: Deactivated successfully.
Sep 10 00:51:15.306623 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:51:15.307732 systemd-logind[1190]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:51:15.309239 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:44406.service.
Sep 10 00:51:15.310120 systemd-logind[1190]: Removed session 14.
Sep 10 00:51:15.342399 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 44406 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:15.343839 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:15.347588 systemd-logind[1190]: New session 15 of user core.
Sep 10 00:51:15.348445 systemd[1]: Started session-15.scope.
Sep 10 00:51:15.508620 sshd[3431]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:15.512509 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:44418.service.
Sep 10 00:51:15.513239 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:44406.service: Deactivated successfully.
Sep 10 00:51:15.513833 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:51:15.514598 systemd-logind[1190]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:51:15.515578 systemd-logind[1190]: Removed session 15.
Sep 10 00:51:15.545995 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 44418 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:15.547543 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:15.551612 systemd-logind[1190]: New session 16 of user core.
Sep 10 00:51:15.552741 systemd[1]: Started session-16.scope.
Sep 10 00:51:17.008056 sshd[3441]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:17.011999 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:44432.service.
Sep 10 00:51:17.012477 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:44418.service: Deactivated successfully.
Sep 10 00:51:17.013181 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:51:17.014583 systemd-logind[1190]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:51:17.015693 systemd-logind[1190]: Removed session 16.
Sep 10 00:51:17.071991 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:17.073778 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:17.078379 systemd-logind[1190]: New session 17 of user core.
Sep 10 00:51:17.079165 systemd[1]: Started session-17.scope.
Sep 10 00:51:17.384728 sshd[3461]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:17.388228 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:44436.service.
Sep 10 00:51:17.391244 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:44432.service: Deactivated successfully.
Sep 10 00:51:17.392144 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:51:17.392741 systemd-logind[1190]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:51:17.393576 systemd-logind[1190]: Removed session 17.
Sep 10 00:51:17.420982 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 44436 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:17.422220 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:17.425655 systemd-logind[1190]: New session 18 of user core.
Sep 10 00:51:17.426700 systemd[1]: Started session-18.scope.
Sep 10 00:51:17.536013 sshd[3473]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:17.539287 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:44436.service: Deactivated successfully.
Sep 10 00:51:17.540011 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:51:17.540820 systemd-logind[1190]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:51:17.541611 systemd-logind[1190]: Removed session 18.
Sep 10 00:51:22.539700 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:42698.service.
Sep 10 00:51:22.572482 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 42698 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:22.573769 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:22.577232 systemd-logind[1190]: New session 19 of user core.
Sep 10 00:51:22.577985 systemd[1]: Started session-19.scope.
Sep 10 00:51:22.997447 sshd[3493]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:23.000145 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:42698.service: Deactivated successfully.
Sep 10 00:51:23.001007 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:51:23.001607 systemd-logind[1190]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:51:23.002274 systemd-logind[1190]: Removed session 19.
Sep 10 00:51:28.001761 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:42702.service.
Sep 10 00:51:28.034016 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 42702 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:28.035162 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:28.038507 systemd-logind[1190]: New session 20 of user core.
Sep 10 00:51:28.039512 systemd[1]: Started session-20.scope.
Sep 10 00:51:28.141549 sshd[3512]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:28.143837 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:42702.service: Deactivated successfully.
Sep 10 00:51:28.144650 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:51:28.145273 systemd-logind[1190]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:51:28.146097 systemd-logind[1190]: Removed session 20.
Sep 10 00:51:33.146700 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:57242.service.
Sep 10 00:51:33.178414 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:33.261431 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:33.264917 systemd-logind[1190]: New session 21 of user core.
Sep 10 00:51:33.265993 systemd[1]: Started session-21.scope.
Sep 10 00:51:33.489261 sshd[3525]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:33.491414 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:57242.service: Deactivated successfully.
Sep 10 00:51:33.492125 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:51:33.492800 systemd-logind[1190]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:51:33.493446 systemd-logind[1190]: Removed session 21.
Sep 10 00:51:38.493602 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:57248.service.
Sep 10 00:51:38.529176 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 57248 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:38.530586 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:38.534260 systemd-logind[1190]: New session 22 of user core.
Sep 10 00:51:38.535418 systemd[1]: Started session-22.scope.
Sep 10 00:51:38.684875 sshd[3538]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:38.687751 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:57248.service: Deactivated successfully.
Sep 10 00:51:38.688284 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:51:38.688810 systemd-logind[1190]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:51:38.689902 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:57260.service.
Sep 10 00:51:38.690944 systemd-logind[1190]: Removed session 22.
Sep 10 00:51:38.721909 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 57260 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:51:38.723189 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:38.726787 systemd-logind[1190]: New session 23 of user core.
Sep 10 00:51:38.727630 systemd[1]: Started session-23.scope.
Sep 10 00:51:40.066122 env[1199]: time="2025-09-10T00:51:40.064715410Z" level=info msg="StopContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" with timeout 30 (s)"
Sep 10 00:51:40.067010 env[1199]: time="2025-09-10T00:51:40.066963696Z" level=info msg="Stop container \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" with signal terminated"
Sep 10 00:51:40.079492 systemd[1]: cri-containerd-dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253.scope: Deactivated successfully.
Sep 10 00:51:40.091182 env[1199]: time="2025-09-10T00:51:40.090913824Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:51:40.096637 env[1199]: time="2025-09-10T00:51:40.096605230Z" level=info msg="StopContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" with timeout 2 (s)"
Sep 10 00:51:40.097178 env[1199]: time="2025-09-10T00:51:40.097131822Z" level=info msg="Stop container \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" with signal terminated"
Sep 10 00:51:40.102127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253-rootfs.mount: Deactivated successfully.
Sep 10 00:51:40.104974 systemd-networkd[1022]: lxc_health: Link DOWN
Sep 10 00:51:40.104981 systemd-networkd[1022]: lxc_health: Lost carrier
Sep 10 00:51:40.344026 systemd[1]: cri-containerd-728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204.scope: Deactivated successfully.
Sep 10 00:51:40.344462 systemd[1]: cri-containerd-728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204.scope: Consumed 6.103s CPU time.
Sep 10 00:51:40.361431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204-rootfs.mount: Deactivated successfully.
Sep 10 00:51:40.439565 env[1199]: time="2025-09-10T00:51:40.439471579Z" level=info msg="shim disconnected" id=728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204
Sep 10 00:51:40.439811 env[1199]: time="2025-09-10T00:51:40.439578773Z" level=warning msg="cleaning up after shim disconnected" id=728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204 namespace=k8s.io
Sep 10 00:51:40.439811 env[1199]: time="2025-09-10T00:51:40.439590506Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:40.440057 env[1199]: time="2025-09-10T00:51:40.440027166Z" level=info msg="shim disconnected" id=dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253
Sep 10 00:51:40.440057 env[1199]: time="2025-09-10T00:51:40.440060128Z" level=warning msg="cleaning up after shim disconnected" id=dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253 namespace=k8s.io
Sep 10 00:51:40.440057 env[1199]: time="2025-09-10T00:51:40.440077893Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:40.447371 env[1199]: time="2025-09-10T00:51:40.447324487Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3623 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:40.449815 env[1199]: time="2025-09-10T00:51:40.449757725Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3622 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:40.452490 env[1199]: time="2025-09-10T00:51:40.452448423Z" level=info msg="StopContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" returns successfully"
Sep 10 00:51:40.453287 env[1199]: time="2025-09-10T00:51:40.453251952Z" level=info msg="StopPodSandbox for \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\""
Sep 10 00:51:40.453347 env[1199]: time="2025-09-10T00:51:40.453334279Z" level=info msg="Container to stop \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.454392 env[1199]: time="2025-09-10T00:51:40.454367294Z" level=info msg="StopContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" returns successfully"
Sep 10 00:51:40.454794 env[1199]: time="2025-09-10T00:51:40.454772254Z" level=info msg="StopPodSandbox for \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\""
Sep 10 00:51:40.454863 env[1199]: time="2025-09-10T00:51:40.454820225Z" level=info msg="Container to stop \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.454863 env[1199]: time="2025-09-10T00:51:40.454835064Z" level=info msg="Container to stop \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.454863 env[1199]: time="2025-09-10T00:51:40.454845894Z" level=info msg="Container to stop \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.454863 env[1199]: time="2025-09-10T00:51:40.454855041Z" level=info msg="Container to stop \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.454972 env[1199]: time="2025-09-10T00:51:40.454864299Z" level=info msg="Container to stop \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:51:40.457328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485-shm.mount: Deactivated successfully.
Sep 10 00:51:40.459758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40-shm.mount: Deactivated successfully. Sep 10 00:51:40.460520 systemd[1]: cri-containerd-5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40.scope: Deactivated successfully. Sep 10 00:51:40.461477 systemd[1]: cri-containerd-aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485.scope: Deactivated successfully. Sep 10 00:51:40.483357 env[1199]: time="2025-09-10T00:51:40.483301352Z" level=info msg="shim disconnected" id=5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40 Sep 10 00:51:40.483650 env[1199]: time="2025-09-10T00:51:40.483629617Z" level=warning msg="cleaning up after shim disconnected" id=5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40 namespace=k8s.io Sep 10 00:51:40.483737 env[1199]: time="2025-09-10T00:51:40.483718966Z" level=info msg="cleaning up dead shim" Sep 10 00:51:40.484658 env[1199]: time="2025-09-10T00:51:40.483978440Z" level=info msg="shim disconnected" id=aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485 Sep 10 00:51:40.484658 env[1199]: time="2025-09-10T00:51:40.484650077Z" level=warning msg="cleaning up after shim disconnected" id=aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485 namespace=k8s.io Sep 10 00:51:40.484658 env[1199]: time="2025-09-10T00:51:40.484658224Z" level=info msg="cleaning up dead shim" Sep 10 00:51:40.491714 env[1199]: time="2025-09-10T00:51:40.491650554Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\n" Sep 10 00:51:40.492045 env[1199]: time="2025-09-10T00:51:40.492016169Z" level=info msg="TearDown network for sandbox \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" successfully" Sep 10 00:51:40.492045 env[1199]: time="2025-09-10T00:51:40.492043421Z" 
level=info msg="StopPodSandbox for \"5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40\" returns successfully" Sep 10 00:51:40.498188 env[1199]: time="2025-09-10T00:51:40.498151900Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n" Sep 10 00:51:40.498504 env[1199]: time="2025-09-10T00:51:40.498473472Z" level=info msg="TearDown network for sandbox \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\" successfully" Sep 10 00:51:40.498504 env[1199]: time="2025-09-10T00:51:40.498502778Z" level=info msg="StopPodSandbox for \"aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485\" returns successfully" Sep 10 00:51:40.507280 kubelet[1929]: I0910 00:51:40.507243 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkv6r\" (UniqueName: \"kubernetes.io/projected/08089bcd-5d61-4c4d-bb10-c99d7259adea-kube-api-access-vkv6r\") pod \"08089bcd-5d61-4c4d-bb10-c99d7259adea\" (UID: \"08089bcd-5d61-4c4d-bb10-c99d7259adea\") " Sep 10 00:51:40.507612 kubelet[1929]: I0910 00:51:40.507308 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-etc-cni-netd\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.507612 kubelet[1929]: I0910 00:51:40.507424 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.510612 kubelet[1929]: I0910 00:51:40.510562 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08089bcd-5d61-4c4d-bb10-c99d7259adea-kube-api-access-vkv6r" (OuterVolumeSpecName: "kube-api-access-vkv6r") pod "08089bcd-5d61-4c4d-bb10-c99d7259adea" (UID: "08089bcd-5d61-4c4d-bb10-c99d7259adea"). InnerVolumeSpecName "kube-api-access-vkv6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:51:40.608539 kubelet[1929]: I0910 00:51:40.608371 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkq7q\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-kube-api-access-gkq7q\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.608803 kubelet[1929]: I0910 00:51:40.608781 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-hostproc\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.608929 kubelet[1929]: I0910 00:51:40.608906 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-config-path\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609083 kubelet[1929]: I0910 00:51:40.609047 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-run\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609206 kubelet[1929]: I0910 00:51:40.609182 1929 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-hubble-tls\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609352 kubelet[1929]: I0910 00:51:40.609330 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-xtables-lock\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609494 kubelet[1929]: I0910 00:51:40.609471 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08089bcd-5d61-4c4d-bb10-c99d7259adea-cilium-config-path\") pod \"08089bcd-5d61-4c4d-bb10-c99d7259adea\" (UID: \"08089bcd-5d61-4c4d-bb10-c99d7259adea\") " Sep 10 00:51:40.609659 kubelet[1929]: I0910 00:51:40.609617 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-bpf-maps\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609659 kubelet[1929]: I0910 00:51:40.609649 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cni-path\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609704 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-kernel\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: 
\"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609731 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ac2-1c73-4b4f-83e7-9cb61d325200-clustermesh-secrets\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609752 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-net\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609774 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-lib-modules\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609816 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-cgroup\") pod \"72015ac2-1c73-4b4f-83e7-9cb61d325200\" (UID: \"72015ac2-1c73-4b4f-83e7-9cb61d325200\") " Sep 10 00:51:40.609868 kubelet[1929]: I0910 00:51:40.609865 1929 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkv6r\" (UniqueName: \"kubernetes.io/projected/08089bcd-5d61-4c4d-bb10-c99d7259adea-kube-api-access-vkv6r\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.610058 kubelet[1929]: I0910 00:51:40.609880 1929 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-etc-cni-netd\") on node \"localhost\" DevicePath 
\"\"" Sep 10 00:51:40.610058 kubelet[1929]: I0910 00:51:40.609920 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.610058 kubelet[1929]: I0910 00:51:40.609953 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.611462 kubelet[1929]: I0910 00:51:40.611416 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:51:40.611558 kubelet[1929]: I0910 00:51:40.611476 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cni-path" (OuterVolumeSpecName: "cni-path") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.611558 kubelet[1929]: I0910 00:51:40.611505 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.611840 kubelet[1929]: I0910 00:51:40.611776 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-hostproc" (OuterVolumeSpecName: "hostproc") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.612548 kubelet[1929]: I0910 00:51:40.612505 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-kube-api-access-gkq7q" (OuterVolumeSpecName: "kube-api-access-gkq7q") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "kube-api-access-gkq7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:51:40.612701 kubelet[1929]: I0910 00:51:40.612675 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.612837 kubelet[1929]: I0910 00:51:40.612814 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.612961 kubelet[1929]: I0910 00:51:40.612938 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.613109 kubelet[1929]: I0910 00:51:40.613085 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:40.613339 kubelet[1929]: I0910 00:51:40.613290 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:51:40.614258 kubelet[1929]: I0910 00:51:40.614229 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08089bcd-5d61-4c4d-bb10-c99d7259adea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "08089bcd-5d61-4c4d-bb10-c99d7259adea" (UID: "08089bcd-5d61-4c4d-bb10-c99d7259adea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:51:40.614832 kubelet[1929]: I0910 00:51:40.614808 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72015ac2-1c73-4b4f-83e7-9cb61d325200-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72015ac2-1c73-4b4f-83e7-9cb61d325200" (UID: "72015ac2-1c73-4b4f-83e7-9cb61d325200"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710594 1929 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ac2-1c73-4b4f-83e7-9cb61d325200-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710634 1929 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710651 1929 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710659 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710667 1929 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkq7q\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-kube-api-access-gkq7q\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710678 1929 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710684 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.710706 kubelet[1929]: I0910 00:51:40.710693 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710702 1929 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ac2-1c73-4b4f-83e7-9cb61d325200-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710709 1929 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710716 1929 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 
10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710722 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08089bcd-5d61-4c4d-bb10-c99d7259adea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710729 1929 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.711507 kubelet[1929]: I0910 00:51:40.710735 1929 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ac2-1c73-4b4f-83e7-9cb61d325200-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:40.743921 systemd[1]: Removed slice kubepods-burstable-pod72015ac2_1c73_4b4f_83e7_9cb61d325200.slice. Sep 10 00:51:40.744034 systemd[1]: kubepods-burstable-pod72015ac2_1c73_4b4f_83e7_9cb61d325200.slice: Consumed 6.216s CPU time. Sep 10 00:51:40.745013 systemd[1]: Removed slice kubepods-besteffort-pod08089bcd_5d61_4c4d_bb10_c99d7259adea.slice. Sep 10 00:51:40.911037 kubelet[1929]: I0910 00:51:40.910898 1929 scope.go:117] "RemoveContainer" containerID="728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204" Sep 10 00:51:40.912276 env[1199]: time="2025-09-10T00:51:40.912215861Z" level=info msg="RemoveContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\"" Sep 10 00:51:41.074786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeb761ff97c535caca8ad5c81af23864623790e53242498c36588d4b6d4e7485-rootfs.mount: Deactivated successfully. Sep 10 00:51:41.074911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cb06011b789f98cdfcc28fda9c8bfb3ce46444d6d66e091d21a70f0492e2e40-rootfs.mount: Deactivated successfully. 
Sep 10 00:51:41.075011 systemd[1]: var-lib-kubelet-pods-08089bcd\x2d5d61\x2d4c4d\x2dbb10\x2dc99d7259adea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvkv6r.mount: Deactivated successfully. Sep 10 00:51:41.075124 systemd[1]: var-lib-kubelet-pods-72015ac2\x2d1c73\x2d4b4f\x2d83e7\x2d9cb61d325200-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkq7q.mount: Deactivated successfully. Sep 10 00:51:41.075219 systemd[1]: var-lib-kubelet-pods-72015ac2\x2d1c73\x2d4b4f\x2d83e7\x2d9cb61d325200-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:51:41.075315 systemd[1]: var-lib-kubelet-pods-72015ac2\x2d1c73\x2d4b4f\x2d83e7\x2d9cb61d325200-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:51:41.196410 env[1199]: time="2025-09-10T00:51:41.196296148Z" level=info msg="RemoveContainer for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" returns successfully" Sep 10 00:51:41.196742 kubelet[1929]: I0910 00:51:41.196648 1929 scope.go:117] "RemoveContainer" containerID="52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2" Sep 10 00:51:41.197648 env[1199]: time="2025-09-10T00:51:41.197614796Z" level=info msg="RemoveContainer for \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\"" Sep 10 00:51:41.361172 env[1199]: time="2025-09-10T00:51:41.361093582Z" level=info msg="RemoveContainer for \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\" returns successfully" Sep 10 00:51:41.361653 kubelet[1929]: I0910 00:51:41.361613 1929 scope.go:117] "RemoveContainer" containerID="d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8" Sep 10 00:51:41.363407 env[1199]: time="2025-09-10T00:51:41.363363289Z" level=info msg="RemoveContainer for \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\"" Sep 10 00:51:41.370968 env[1199]: time="2025-09-10T00:51:41.370907418Z" level=info msg="RemoveContainer 
for \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\" returns successfully" Sep 10 00:51:41.371286 kubelet[1929]: I0910 00:51:41.371213 1929 scope.go:117] "RemoveContainer" containerID="0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1" Sep 10 00:51:41.372638 env[1199]: time="2025-09-10T00:51:41.372602331Z" level=info msg="RemoveContainer for \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\"" Sep 10 00:51:41.377547 env[1199]: time="2025-09-10T00:51:41.377473536Z" level=info msg="RemoveContainer for \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\" returns successfully" Sep 10 00:51:41.377778 kubelet[1929]: I0910 00:51:41.377732 1929 scope.go:117] "RemoveContainer" containerID="39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664" Sep 10 00:51:41.378922 env[1199]: time="2025-09-10T00:51:41.378874271Z" level=info msg="RemoveContainer for \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\"" Sep 10 00:51:41.418693 env[1199]: time="2025-09-10T00:51:41.418636067Z" level=info msg="RemoveContainer for \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\" returns successfully" Sep 10 00:51:41.419104 kubelet[1929]: I0910 00:51:41.419042 1929 scope.go:117] "RemoveContainer" containerID="728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204" Sep 10 00:51:41.419442 env[1199]: time="2025-09-10T00:51:41.419359112Z" level=error msg="ContainerStatus for \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\": not found" Sep 10 00:51:41.419624 kubelet[1929]: E0910 00:51:41.419585 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\": not found" containerID="728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204" Sep 10 00:51:41.419764 kubelet[1929]: I0910 00:51:41.419642 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204"} err="failed to get container status \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\": rpc error: code = NotFound desc = an error occurred when try to find container \"728d7cfe3057edddf7a6d61ff9fb99f4cd23add3986e7b8087d1dfb64a3b9204\": not found" Sep 10 00:51:41.419764 kubelet[1929]: I0910 00:51:41.419763 1929 scope.go:117] "RemoveContainer" containerID="52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2" Sep 10 00:51:41.420185 env[1199]: time="2025-09-10T00:51:41.420092718Z" level=error msg="ContainerStatus for \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\": not found" Sep 10 00:51:41.420334 kubelet[1929]: E0910 00:51:41.420310 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\": not found" containerID="52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2" Sep 10 00:51:41.420389 kubelet[1929]: I0910 00:51:41.420335 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2"} err="failed to get container status \"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"52cbdcf49bc0793f6e4485f6f57d97390febc05728551f97200cdba790d26ef2\": not found" Sep 10 00:51:41.420389 kubelet[1929]: I0910 00:51:41.420353 1929 scope.go:117] "RemoveContainer" containerID="d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8" Sep 10 00:51:41.420595 env[1199]: time="2025-09-10T00:51:41.420549737Z" level=error msg="ContainerStatus for \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\": not found" Sep 10 00:51:41.420724 kubelet[1929]: E0910 00:51:41.420700 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\": not found" containerID="d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8" Sep 10 00:51:41.420774 kubelet[1929]: I0910 00:51:41.420725 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8"} err="failed to get container status \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d207163557ac6025337fce945b45e42fa60b2f48ab7637c956044a949d42b5c8\": not found" Sep 10 00:51:41.420774 kubelet[1929]: I0910 00:51:41.420741 1929 scope.go:117] "RemoveContainer" containerID="0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1" Sep 10 00:51:41.420934 env[1199]: time="2025-09-10T00:51:41.420888632Z" level=error msg="ContainerStatus for \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\": not 
found" Sep 10 00:51:41.421048 kubelet[1929]: E0910 00:51:41.421028 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\": not found" containerID="0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1" Sep 10 00:51:41.421109 kubelet[1929]: I0910 00:51:41.421046 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1"} err="failed to get container status \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0106c3784053b48fd5033bf9bfb41c436ea8cdedbcdd88d3f95f916c9fd7c8a1\": not found" Sep 10 00:51:41.421109 kubelet[1929]: I0910 00:51:41.421072 1929 scope.go:117] "RemoveContainer" containerID="39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664" Sep 10 00:51:41.421265 env[1199]: time="2025-09-10T00:51:41.421225533Z" level=error msg="ContainerStatus for \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\": not found" Sep 10 00:51:41.421389 kubelet[1929]: E0910 00:51:41.421360 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\": not found" containerID="39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664" Sep 10 00:51:41.421440 kubelet[1929]: I0910 00:51:41.421401 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664"} 
err="failed to get container status \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\": rpc error: code = NotFound desc = an error occurred when try to find container \"39a0e1deb017bfba542779dd34c39f127eeed678f795ad8ec416c9fd9419f664\": not found" Sep 10 00:51:41.421440 kubelet[1929]: I0910 00:51:41.421430 1929 scope.go:117] "RemoveContainer" containerID="dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253" Sep 10 00:51:41.422660 env[1199]: time="2025-09-10T00:51:41.422613723Z" level=info msg="RemoveContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\"" Sep 10 00:51:41.426774 env[1199]: time="2025-09-10T00:51:41.426718890Z" level=info msg="RemoveContainer for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" returns successfully" Sep 10 00:51:41.426961 kubelet[1929]: I0910 00:51:41.426892 1929 scope.go:117] "RemoveContainer" containerID="dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253" Sep 10 00:51:41.427127 env[1199]: time="2025-09-10T00:51:41.427071732Z" level=error msg="ContainerStatus for \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\": not found" Sep 10 00:51:41.427227 kubelet[1929]: E0910 00:51:41.427205 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\": not found" containerID="dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253" Sep 10 00:51:41.427227 kubelet[1929]: I0910 00:51:41.427225 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253"} err="failed to get container status 
\"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbbf16960d87a5855be6e9fdb3e37bc65ef05c497a64602010d463053efab253\": not found" Sep 10 00:51:42.030521 sshd[3551]: pam_unix(sshd:session): session closed for user core Sep 10 00:51:42.034155 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:57260.service: Deactivated successfully. Sep 10 00:51:42.034859 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:51:42.035562 systemd-logind[1190]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:51:42.036812 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:37236.service. Sep 10 00:51:42.037582 systemd-logind[1190]: Removed session 23. Sep 10 00:51:42.074676 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:51:42.076118 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:51:42.079967 systemd-logind[1190]: New session 24 of user core. Sep 10 00:51:42.080770 systemd[1]: Started session-24.scope. Sep 10 00:51:42.738472 kubelet[1929]: I0910 00:51:42.738418 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08089bcd-5d61-4c4d-bb10-c99d7259adea" path="/var/lib/kubelet/pods/08089bcd-5d61-4c4d-bb10-c99d7259adea/volumes" Sep 10 00:51:42.739084 kubelet[1929]: I0910 00:51:42.739043 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" path="/var/lib/kubelet/pods/72015ac2-1c73-4b4f-83e7-9cb61d325200/volumes" Sep 10 00:51:42.933199 sshd[3716]: pam_unix(sshd:session): session closed for user core Sep 10 00:51:42.937743 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:37238.service. Sep 10 00:51:42.938264 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:37236.service: Deactivated successfully. 
Sep 10 00:51:42.938914 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:51:42.939815 systemd-logind[1190]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:51:42.940948 systemd-logind[1190]: Removed session 24. Sep 10 00:51:42.964612 kubelet[1929]: E0910 00:51:42.964566 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="mount-cgroup" Sep 10 00:51:42.964847 kubelet[1929]: E0910 00:51:42.964826 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="apply-sysctl-overwrites" Sep 10 00:51:42.964954 kubelet[1929]: E0910 00:51:42.964934 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="mount-bpf-fs" Sep 10 00:51:42.965071 kubelet[1929]: E0910 00:51:42.965040 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="08089bcd-5d61-4c4d-bb10-c99d7259adea" containerName="cilium-operator" Sep 10 00:51:42.965175 kubelet[1929]: E0910 00:51:42.965154 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="clean-cilium-state" Sep 10 00:51:42.965270 kubelet[1929]: E0910 00:51:42.965251 1929 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="cilium-agent" Sep 10 00:51:42.965395 kubelet[1929]: I0910 00:51:42.965374 1929 memory_manager.go:354] "RemoveStaleState removing state" podUID="08089bcd-5d61-4c4d-bb10-c99d7259adea" containerName="cilium-operator" Sep 10 00:51:42.965496 kubelet[1929]: I0910 00:51:42.965477 1929 memory_manager.go:354] "RemoveStaleState removing state" podUID="72015ac2-1c73-4b4f-83e7-9cb61d325200" containerName="cilium-agent" Sep 10 00:51:42.977355 systemd[1]: Created slice kubepods-burstable-pode15bf996_1fcc_4109_8938_d4441c31a9a6.slice. 
Sep 10 00:51:42.987787 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 37238 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:51:42.989588 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:51:42.996499 systemd[1]: Started session-25.scope. Sep 10 00:51:42.998275 systemd-logind[1190]: New session 25 of user core. Sep 10 00:51:43.124751 sshd[3727]: pam_unix(sshd:session): session closed for user core Sep 10 00:51:43.127298 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:37238.service: Deactivated successfully. Sep 10 00:51:43.127815 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:51:43.128504 systemd-logind[1190]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:51:43.129706 systemd[1]: Started sshd@25-10.0.0.131:22-10.0.0.1:37252.service. Sep 10 00:51:43.131213 systemd-logind[1190]: Removed session 25. Sep 10 00:51:43.137879 kubelet[1929]: E0910 00:51:43.137822 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-74rdg lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-zr9jf" podUID="e15bf996-1fcc-4109-8938-d4441c31a9a6" Sep 10 00:51:43.148309 kubelet[1929]: I0910 00:51:43.148244 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-run\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148319 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-config-path\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148350 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-etc-cni-netd\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148368 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-xtables-lock\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148401 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74rdg\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-kube-api-access-74rdg\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148430 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-cgroup\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148494 kubelet[1929]: I0910 00:51:43.148446 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-lib-modules\") pod \"cilium-zr9jf\" (UID: 
\"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148471 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-hostproc\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148494 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-clustermesh-secrets\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148513 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-kernel\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148546 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cni-path\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148567 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-hubble-tls\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148654 kubelet[1929]: I0910 00:51:43.148583 
1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-ipsec-secrets\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148787 kubelet[1929]: I0910 00:51:43.148614 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-bpf-maps\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.148787 kubelet[1929]: I0910 00:51:43.148634 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-net\") pod \"cilium-zr9jf\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " pod="kube-system/cilium-zr9jf" Sep 10 00:51:43.164195 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 37252 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:51:43.165411 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:51:43.169027 systemd-logind[1190]: New session 26 of user core. Sep 10 00:51:43.169938 systemd[1]: Started session-26.scope. 
Sep 10 00:51:43.776295 kubelet[1929]: E0910 00:51:43.776242 1929 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:51:43.953861 kubelet[1929]: I0910 00:51:43.953790 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-config-path\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.953861 kubelet[1929]: I0910 00:51:43.953847 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-cgroup\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.953861 kubelet[1929]: I0910 00:51:43.953873 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-kernel\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954125 kubelet[1929]: I0910 00:51:43.953919 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-run\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954125 kubelet[1929]: I0910 00:51:43.953939 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-etc-cni-netd\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: 
\"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954125 kubelet[1929]: I0910 00:51:43.953915 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954125 kubelet[1929]: I0910 00:51:43.953980 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954125 kubelet[1929]: I0910 00:51:43.953937 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954011 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74rdg\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-kube-api-access-74rdg\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954034 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-clustermesh-secrets\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954083 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-ipsec-secrets\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954105 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-net\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954136 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-lib-modules\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954243 kubelet[1929]: I0910 00:51:43.954155 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-hostproc\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954173 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cni-path\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954199 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-hubble-tls\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954224 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-xtables-lock\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954246 1929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-bpf-maps\") pod \"e15bf996-1fcc-4109-8938-d4441c31a9a6\" (UID: \"e15bf996-1fcc-4109-8938-d4441c31a9a6\") " Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954284 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954298 1929 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:43.954381 kubelet[1929]: I0910 00:51:43.954315 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:43.954572 kubelet[1929]: I0910 00:51:43.954347 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954572 kubelet[1929]: I0910 00:51:43.954349 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954572 kubelet[1929]: I0910 00:51:43.954371 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.954572 kubelet[1929]: I0910 00:51:43.954398 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.956210 kubelet[1929]: I0910 00:51:43.956171 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:51:43.956290 kubelet[1929]: I0910 00:51:43.956227 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.958260 kubelet[1929]: I0910 00:51:43.957592 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:51:43.958260 kubelet[1929]: I0910 00:51:43.957649 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.958260 kubelet[1929]: I0910 00:51:43.957678 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:51:43.958260 kubelet[1929]: I0910 00:51:43.957789 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-kube-api-access-74rdg" (OuterVolumeSpecName: "kube-api-access-74rdg") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "kube-api-access-74rdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:51:43.958552 systemd[1]: var-lib-kubelet-pods-e15bf996\x2d1fcc\x2d4109\x2d8938\x2dd4441c31a9a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74rdg.mount: Deactivated successfully. Sep 10 00:51:43.958652 systemd[1]: var-lib-kubelet-pods-e15bf996\x2d1fcc\x2d4109\x2d8938\x2dd4441c31a9a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:51:43.958714 systemd[1]: var-lib-kubelet-pods-e15bf996\x2d1fcc\x2d4109\x2d8938\x2dd4441c31a9a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:51:43.958900 kubelet[1929]: I0910 00:51:43.958875 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:51:43.960067 kubelet[1929]: I0910 00:51:43.960022 1929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e15bf996-1fcc-4109-8938-d4441c31a9a6" (UID: "e15bf996-1fcc-4109-8938-d4441c31a9a6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:51:43.961005 systemd[1]: var-lib-kubelet-pods-e15bf996\x2d1fcc\x2d4109\x2d8938\x2dd4441c31a9a6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 10 00:51:44.055483 kubelet[1929]: I0910 00:51:44.055433 1929 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055483 kubelet[1929]: I0910 00:51:44.055473 1929 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055483 kubelet[1929]: I0910 00:51:44.055482 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055483 kubelet[1929]: I0910 00:51:44.055492 1929 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74rdg\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-kube-api-access-74rdg\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055500 1929 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055507 1929 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055514 1929 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e15bf996-1fcc-4109-8938-d4441c31a9a6-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055522 1929 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055549 1929 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055555 1929 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055562 1929 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e15bf996-1fcc-4109-8938-d4441c31a9a6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:51:44.055729 kubelet[1929]: I0910 00:51:44.055568 1929 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e15bf996-1fcc-4109-8938-d4441c31a9a6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 
00:51:44.745164 systemd[1]: Removed slice kubepods-burstable-pode15bf996_1fcc_4109_8938_d4441c31a9a6.slice. Sep 10 00:51:44.958989 kubelet[1929]: I0910 00:51:44.958946 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-etc-cni-netd\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.958989 kubelet[1929]: I0910 00:51:44.958985 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-host-proc-sys-kernel\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.958989 kubelet[1929]: I0910 00:51:44.959000 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9439d3ab-9803-4cb9-9465-43fa973aa3d0-cilium-config-path\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959013 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9439d3ab-9803-4cb9-9465-43fa973aa3d0-cilium-ipsec-secrets\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959027 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-host-proc-sys-net\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " 
pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959043 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-bpf-maps\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959066 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-hostproc\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959078 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-lib-modules\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959459 kubelet[1929]: I0910 00:51:44.959091 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-cilium-cgroup\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959103 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-cni-path\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959115 1929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-xtables-lock\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959128 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9439d3ab-9803-4cb9-9465-43fa973aa3d0-hubble-tls\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959142 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9439d3ab-9803-4cb9-9465-43fa973aa3d0-cilium-run\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959155 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9439d3ab-9803-4cb9-9465-43fa973aa3d0-clustermesh-secrets\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.959793 kubelet[1929]: I0910 00:51:44.959167 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhj4w\" (UniqueName: \"kubernetes.io/projected/9439d3ab-9803-4cb9-9465-43fa973aa3d0-kube-api-access-zhj4w\") pod \"cilium-zshws\" (UID: \"9439d3ab-9803-4cb9-9465-43fa973aa3d0\") " pod="kube-system/cilium-zshws" Sep 10 00:51:44.961063 systemd[1]: Created slice kubepods-burstable-pod9439d3ab_9803_4cb9_9465_43fa973aa3d0.slice. 
Sep 10 00:51:45.264256 kubelet[1929]: E0910 00:51:45.264203 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:45.264857 env[1199]: time="2025-09-10T00:51:45.264796026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zshws,Uid:9439d3ab-9803-4cb9-9465-43fa973aa3d0,Namespace:kube-system,Attempt:0,}"
Sep 10 00:51:45.278771 env[1199]: time="2025-09-10T00:51:45.278696695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:51:45.278771 env[1199]: time="2025-09-10T00:51:45.278742973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:51:45.278771 env[1199]: time="2025-09-10T00:51:45.278757662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:51:45.279007 env[1199]: time="2025-09-10T00:51:45.278945578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce pid=3769 runtime=io.containerd.runc.v2
Sep 10 00:51:45.291492 systemd[1]: Started cri-containerd-d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce.scope.
Sep 10 00:51:45.314118 env[1199]: time="2025-09-10T00:51:45.314043048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zshws,Uid:9439d3ab-9803-4cb9-9465-43fa973aa3d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\""
Sep 10 00:51:45.314858 kubelet[1929]: E0910 00:51:45.314831 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:45.318388 env[1199]: time="2025-09-10T00:51:45.317121921Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:51:45.330348 env[1199]: time="2025-09-10T00:51:45.330278946Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798\""
Sep 10 00:51:45.331974 env[1199]: time="2025-09-10T00:51:45.331064379Z" level=info msg="StartContainer for \"e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798\""
Sep 10 00:51:45.345891 systemd[1]: Started cri-containerd-e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798.scope.
Sep 10 00:51:45.379830 env[1199]: time="2025-09-10T00:51:45.379770741Z" level=info msg="StartContainer for \"e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798\" returns successfully"
Sep 10 00:51:45.387379 systemd[1]: cri-containerd-e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798.scope: Deactivated successfully.
Sep 10 00:51:45.413964 env[1199]: time="2025-09-10T00:51:45.413913485Z" level=info msg="shim disconnected" id=e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798
Sep 10 00:51:45.413964 env[1199]: time="2025-09-10T00:51:45.413962969Z" level=warning msg="cleaning up after shim disconnected" id=e44ebb1b962d134a3647792ff4fe556cfaaead35ed987fd77a2834a0f0e44798 namespace=k8s.io
Sep 10 00:51:45.413964 env[1199]: time="2025-09-10T00:51:45.413971786Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:45.420740 env[1199]: time="2025-09-10T00:51:45.420629333Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3856 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:45.736470 kubelet[1929]: E0910 00:51:45.736414 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:45.928563 kubelet[1929]: E0910 00:51:45.928333 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:45.933800 env[1199]: time="2025-09-10T00:51:45.933744628Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:51:45.948432 env[1199]: time="2025-09-10T00:51:45.948362370Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb\""
Sep 10 00:51:45.948944 env[1199]: time="2025-09-10T00:51:45.948913738Z" level=info msg="StartContainer for \"308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb\""
Sep 10 00:51:45.962920 systemd[1]: Started cri-containerd-308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb.scope.
Sep 10 00:51:45.989860 env[1199]: time="2025-09-10T00:51:45.989740760Z" level=info msg="StartContainer for \"308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb\" returns successfully"
Sep 10 00:51:45.995964 systemd[1]: cri-containerd-308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb.scope: Deactivated successfully.
Sep 10 00:51:46.016450 env[1199]: time="2025-09-10T00:51:46.016398590Z" level=info msg="shim disconnected" id=308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb
Sep 10 00:51:46.016724 env[1199]: time="2025-09-10T00:51:46.016452303Z" level=warning msg="cleaning up after shim disconnected" id=308f2d39591c7a514f32eb9f8e76d945177d741bd73df394839902e90ac6fbcb namespace=k8s.io
Sep 10 00:51:46.016724 env[1199]: time="2025-09-10T00:51:46.016468273Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:46.023167 env[1199]: time="2025-09-10T00:51:46.023113876Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3917 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:46.738571 kubelet[1929]: I0910 00:51:46.738499 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15bf996-1fcc-4109-8938-d4441c31a9a6" path="/var/lib/kubelet/pods/e15bf996-1fcc-4109-8938-d4441c31a9a6/volumes"
Sep 10 00:51:46.931574 kubelet[1929]: E0910 00:51:46.931515 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:46.933138 env[1199]: time="2025-09-10T00:51:46.933098246Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:51:47.021377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913318492.mount: Deactivated successfully.
Sep 10 00:51:47.023691 env[1199]: time="2025-09-10T00:51:47.023648224Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36\""
Sep 10 00:51:47.024148 env[1199]: time="2025-09-10T00:51:47.024111243Z" level=info msg="StartContainer for \"ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36\""
Sep 10 00:51:47.042261 systemd[1]: Started cri-containerd-ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36.scope.
Sep 10 00:51:47.063505 systemd[1]: run-containerd-runc-k8s.io-ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36-runc.X7F1kH.mount: Deactivated successfully.
Sep 10 00:51:47.071777 env[1199]: time="2025-09-10T00:51:47.071685898Z" level=info msg="StartContainer for \"ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36\" returns successfully"
Sep 10 00:51:47.074222 systemd[1]: cri-containerd-ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36.scope: Deactivated successfully.
Sep 10 00:51:47.093165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36-rootfs.mount: Deactivated successfully.
Sep 10 00:51:47.098631 env[1199]: time="2025-09-10T00:51:47.098580412Z" level=info msg="shim disconnected" id=ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36
Sep 10 00:51:47.098631 env[1199]: time="2025-09-10T00:51:47.098627762Z" level=warning msg="cleaning up after shim disconnected" id=ce11aad1a4f7c31d9de4f595318a10420c68288ac83fe9ed37e8b4c31ba5fc36 namespace=k8s.io
Sep 10 00:51:47.098631 env[1199]: time="2025-09-10T00:51:47.098635927Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:47.105207 env[1199]: time="2025-09-10T00:51:47.105173725Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3974 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:47.934858 kubelet[1929]: E0910 00:51:47.934826 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:47.936311 env[1199]: time="2025-09-10T00:51:47.936235158Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:51:48.071348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634801360.mount: Deactivated successfully.
Sep 10 00:51:48.173180 env[1199]: time="2025-09-10T00:51:48.173112179Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980\""
Sep 10 00:51:48.173764 env[1199]: time="2025-09-10T00:51:48.173704655Z" level=info msg="StartContainer for \"1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980\""
Sep 10 00:51:48.190017 systemd[1]: run-containerd-runc-k8s.io-1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980-runc.ytJXXp.mount: Deactivated successfully.
Sep 10 00:51:48.191448 systemd[1]: Started cri-containerd-1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980.scope.
Sep 10 00:51:48.212763 systemd[1]: cri-containerd-1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980.scope: Deactivated successfully.
Sep 10 00:51:48.214166 env[1199]: time="2025-09-10T00:51:48.214115876Z" level=info msg="StartContainer for \"1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980\" returns successfully"
Sep 10 00:51:48.233727 env[1199]: time="2025-09-10T00:51:48.233675714Z" level=info msg="shim disconnected" id=1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980
Sep 10 00:51:48.233727 env[1199]: time="2025-09-10T00:51:48.233732071Z" level=warning msg="cleaning up after shim disconnected" id=1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980 namespace=k8s.io
Sep 10 00:51:48.233950 env[1199]: time="2025-09-10T00:51:48.233740757Z" level=info msg="cleaning up dead shim"
Sep 10 00:51:48.240682 env[1199]: time="2025-09-10T00:51:48.240612329Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n"
Sep 10 00:51:48.777584 kubelet[1929]: E0910 00:51:48.777521 1929 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:51:48.938062 kubelet[1929]: E0910 00:51:48.938023 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:48.939757 env[1199]: time="2025-09-10T00:51:48.939688406Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:51:48.960910 env[1199]: time="2025-09-10T00:51:48.960858224Z" level=info msg="CreateContainer within sandbox \"d651d0d5e42d4ebf2e172c6322329aa98a597d9c82ddc813332a75504ed14dce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2de03f58642ed1c50deeb0131a109fa08bd77381315d408f5cfb439ff4468379\""
Sep 10 00:51:48.961380 env[1199]: time="2025-09-10T00:51:48.961358666Z" level=info msg="StartContainer for \"2de03f58642ed1c50deeb0131a109fa08bd77381315d408f5cfb439ff4468379\""
Sep 10 00:51:48.975986 systemd[1]: Started cri-containerd-2de03f58642ed1c50deeb0131a109fa08bd77381315d408f5cfb439ff4468379.scope.
Sep 10 00:51:49.001681 env[1199]: time="2025-09-10T00:51:49.001635602Z" level=info msg="StartContainer for \"2de03f58642ed1c50deeb0131a109fa08bd77381315d408f5cfb439ff4468379\" returns successfully"
Sep 10 00:51:49.068970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f73f85dbad1c09eee19fd751aca0270ed2d81a6037b888566e3b3d0f269a980-rootfs.mount: Deactivated successfully.
Sep 10 00:51:49.267569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 10 00:51:49.942774 kubelet[1929]: E0910 00:51:49.942731 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:49.955230 kubelet[1929]: I0910 00:51:49.955165 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zshws" podStartSLOduration=5.9551421730000005 podStartE2EDuration="5.955142173s" podCreationTimestamp="2025-09-10 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:49.954628426 +0000 UTC m=+91.305217841" watchObservedRunningTime="2025-09-10 00:51:49.955142173 +0000 UTC m=+91.305731588"
Sep 10 00:51:50.657656 kubelet[1929]: I0910 00:51:50.657596 1929 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T00:51:50Z","lastTransitionTime":"2025-09-10T00:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 00:51:51.265812 kubelet[1929]: E0910 00:51:51.265771 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:51.893040 systemd-networkd[1022]: lxc_health: Link UP
Sep 10 00:51:51.907307 systemd-networkd[1022]: lxc_health: Gained carrier
Sep 10 00:51:51.907550 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 10 00:51:52.737240 kubelet[1929]: E0910 00:51:52.737196 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.266221 kubelet[1929]: E0910 00:51:53.266122 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.631682 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Sep 10 00:51:53.950216 kubelet[1929]: E0910 00:51:53.950090 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:54.952110 kubelet[1929]: E0910 00:51:54.952067 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:56.736410 kubelet[1929]: E0910 00:51:56.736371 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:56.736909 kubelet[1929]: E0910 00:51:56.736592 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:57.748457 systemd[1]: run-containerd-runc-k8s.io-2de03f58642ed1c50deeb0131a109fa08bd77381315d408f5cfb439ff4468379-runc.F55SWt.mount: Deactivated successfully.
Sep 10 00:51:57.801297 sshd[3741]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:57.804290 systemd[1]: sshd@25-10.0.0.131:22-10.0.0.1:37252.service: Deactivated successfully.
Sep 10 00:51:57.804966 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:51:57.805657 systemd-logind[1190]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:51:57.806514 systemd-logind[1190]: Removed session 26.