Sep 13 00:52:29.005513 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:52:29.005538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:52:29.005551 kernel: BIOS-provided physical RAM map:
Sep 13 00:52:29.005559 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:52:29.005566 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:52:29.005573 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:52:29.005582 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:52:29.005590 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:52:29.005597 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 13 00:52:29.005607 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 13 00:52:29.005614 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 13 00:52:29.005621 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 13 00:52:29.005629 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 13 00:52:29.005637 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:52:29.005646 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 13 00:52:29.005656 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 13 00:52:29.005664 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:52:29.005671 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:52:29.005683 kernel: NX (Execute Disable) protection: active
Sep 13 00:52:29.005691 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 13 00:52:29.005699 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 13 00:52:29.005707 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 13 00:52:29.005715 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 13 00:52:29.005722 kernel: extended physical RAM map:
Sep 13 00:52:29.005730 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:52:29.005739 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:52:29.005747 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:52:29.005755 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:52:29.005763 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:52:29.005771 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 13 00:52:29.005779 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 13 00:52:29.005787 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Sep 13 00:52:29.005795 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Sep 13 00:52:29.005803 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Sep 13 00:52:29.005810 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Sep 13 00:52:29.005818 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Sep 13 00:52:29.005828 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 13 00:52:29.005836 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 13 00:52:29.005843 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:52:29.005851 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 13 00:52:29.005863 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 13 00:52:29.005872 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:52:29.005880 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:52:29.005890 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:52:29.005898 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Sep 13 00:52:29.005907 kernel: random: crng init done
Sep 13 00:52:29.005915 kernel: SMBIOS 2.8 present.
Sep 13 00:52:29.005924 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 13 00:52:29.005932 kernel: Hypervisor detected: KVM
Sep 13 00:52:29.005941 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:52:29.005949 kernel: kvm-clock: cpu 0, msr 1c19f001, primary cpu clock
Sep 13 00:52:29.005963 kernel: kvm-clock: using sched offset of 5059262727 cycles
Sep 13 00:52:29.006003 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:52:29.006036 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:52:29.006046 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:52:29.006055 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:52:29.006064 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 13 00:52:29.006073 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:52:29.006081 kernel: Using GB pages for direct mapping
Sep 13 00:52:29.006103 kernel: Secure boot disabled
Sep 13 00:52:29.006112 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:52:29.006123 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:52:29.006132 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:52:29.006141 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006150 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006163 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:52:29.006172 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006180 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006192 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006201 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:52:29.006219 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:52:29.006237 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:52:29.006247 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:52:29.006255 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:52:29.006275 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:52:29.006284 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:52:29.006292 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:52:29.006301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:52:29.006310 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:52:29.006320 kernel: No NUMA configuration found
Sep 13 00:52:29.006329 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 13 00:52:29.006338 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 13 00:52:29.006347 kernel: Zone ranges:
Sep 13 00:52:29.006356 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:52:29.006364 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 13 00:52:29.006373 kernel: Normal empty
Sep 13 00:52:29.006382 kernel: Movable zone start for each node
Sep 13 00:52:29.006390 kernel: Early memory node ranges
Sep 13 00:52:29.006401 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:52:29.006409 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:52:29.006418 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:52:29.006427 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 13 00:52:29.006436 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 13 00:52:29.006444 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 13 00:52:29.006452 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 13 00:52:29.006461 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:52:29.006469 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:52:29.006478 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:52:29.006488 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:52:29.006497 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 13 00:52:29.006506 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:52:29.006514 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 13 00:52:29.006523 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:52:29.006532 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:52:29.006540 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:52:29.006549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:52:29.006558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:52:29.006569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:52:29.006577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:52:29.006586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:52:29.006600 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:52:29.006611 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:52:29.006619 kernel: TSC deadline timer available
Sep 13 00:52:29.006628 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:52:29.006636 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:52:29.006645 kernel: kvm-guest: setup PV sched yield
Sep 13 00:52:29.006656 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:52:29.006665 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:52:29.006679 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:52:29.006690 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:52:29.006699 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 13 00:52:29.006708 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 13 00:52:29.006717 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:52:29.006726 kernel: kvm-guest: setup async PF for cpu 0
Sep 13 00:52:29.006735 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Sep 13 00:52:29.006744 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:52:29.006753 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:52:29.006762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 13 00:52:29.006773 kernel: Policy zone: DMA32
Sep 13 00:52:29.006784 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:52:29.006793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:52:29.006803 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:52:29.006813 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:52:29.006822 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:52:29.006832 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved)
Sep 13 00:52:29.006841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:52:29.006850 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:52:29.006859 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:52:29.006868 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:52:29.006878 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:52:29.006887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:52:29.006898 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:52:29.006907 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:52:29.006916 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:52:29.006926 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:52:29.006934 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:52:29.006944 kernel: Console: colour dummy device 80x25
Sep 13 00:52:29.006953 kernel: printk: console [ttyS0] enabled
Sep 13 00:52:29.006962 kernel: ACPI: Core revision 20210730
Sep 13 00:52:29.006981 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:52:29.006992 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:52:29.007001 kernel: x2apic enabled
Sep 13 00:52:29.007010 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:52:29.007019 kernel: kvm-guest: setup PV IPIs
Sep 13 00:52:29.007028 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:52:29.007037 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:52:29.007047 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:52:29.007056 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:52:29.007068 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:52:29.007079 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:52:29.007100 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:52:29.007110 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:52:29.007119 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:52:29.007128 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:52:29.007137 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:52:29.007146 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:52:29.007158 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:52:29.007168 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:52:29.007179 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:52:29.007188 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:52:29.007197 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:52:29.007206 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:52:29.007215 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:52:29.007224 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:52:29.007233 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:52:29.007242 kernel: LSM: Security Framework initializing
Sep 13 00:52:29.007251 kernel: SELinux: Initializing.
Sep 13 00:52:29.007262 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:52:29.007271 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:52:29.007281 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:52:29.007290 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:52:29.007299 kernel: ... version: 0
Sep 13 00:52:29.007308 kernel: ... bit width: 48
Sep 13 00:52:29.007317 kernel: ... generic registers: 6
Sep 13 00:52:29.007326 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:52:29.007335 kernel: ... max period: 00007fffffffffff
Sep 13 00:52:29.007345 kernel: ... fixed-purpose events: 0
Sep 13 00:52:29.007354 kernel: ... event mask: 000000000000003f
Sep 13 00:52:29.007364 kernel: signal: max sigframe size: 1776
Sep 13 00:52:29.007372 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:52:29.007381 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:52:29.007390 kernel: x86: Booting SMP configuration:
Sep 13 00:52:29.007399 kernel: .... node #0, CPUs: #1
Sep 13 00:52:29.007408 kernel: kvm-clock: cpu 1, msr 1c19f041, secondary cpu clock
Sep 13 00:52:29.007417 kernel: kvm-guest: setup async PF for cpu 1
Sep 13 00:52:29.007428 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Sep 13 00:52:29.007437 kernel: #2
Sep 13 00:52:29.007446 kernel: kvm-clock: cpu 2, msr 1c19f081, secondary cpu clock
Sep 13 00:52:29.007455 kernel: kvm-guest: setup async PF for cpu 2
Sep 13 00:52:29.007464 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Sep 13 00:52:29.007473 kernel: #3
Sep 13 00:52:29.007482 kernel: kvm-clock: cpu 3, msr 1c19f0c1, secondary cpu clock
Sep 13 00:52:29.007490 kernel: kvm-guest: setup async PF for cpu 3
Sep 13 00:52:29.007499 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Sep 13 00:52:29.007512 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:52:29.007523 kernel: smpboot: Max logical packages: 1
Sep 13 00:52:29.007533 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:52:29.007541 kernel: devtmpfs: initialized
Sep 13 00:52:29.007550 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:52:29.007559 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:52:29.007568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:52:29.007578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 13 00:52:29.007587 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:52:29.007596 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:52:29.007607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:52:29.007616 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:52:29.007626 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:52:29.007635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:52:29.007644 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:52:29.007653 kernel: audit: type=2000 audit(1757724748.563:1): state=initialized audit_enabled=0 res=1
Sep 13 00:52:29.007662 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:52:29.007671 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:52:29.007682 kernel: cpuidle: using governor menu
Sep 13 00:52:29.007691 kernel: ACPI: bus type PCI registered
Sep 13 00:52:29.007700 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:52:29.007709 kernel: dca service started, version 1.12.1
Sep 13 00:52:29.007718 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:52:29.007728 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 13 00:52:29.007737 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:52:29.007746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:52:29.007759 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:52:29.007770 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:52:29.007779 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:52:29.007788 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:52:29.007797 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:52:29.007806 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:52:29.007815 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:52:29.007824 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:52:29.007833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:52:29.007842 kernel: ACPI: Interpreter enabled
Sep 13 00:52:29.007851 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:52:29.007862 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:52:29.007871 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:52:29.007880 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:52:29.007889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:52:29.008059 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:52:29.008178 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:52:29.008272 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:52:29.008289 kernel: PCI host bridge to bus 0000:00
Sep 13 00:52:29.008399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:52:29.008489 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:52:29.008573 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:52:29.008657 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:52:29.008740 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:52:29.008822 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 13 00:52:29.008910 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:52:29.009056 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:52:29.009188 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:52:29.009294 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:52:29.009388 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 13 00:52:29.009483 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:52:29.009580 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 13 00:52:29.009674 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:52:29.009815 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:52:29.009916 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 13 00:52:29.010060 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 13 00:52:29.010216 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 13 00:52:29.010367 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:52:29.010470 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 13 00:52:29.010563 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 13 00:52:29.010657 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 13 00:52:29.010766 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:52:29.010863 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 13 00:52:29.010956 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 13 00:52:29.011066 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 13 00:52:29.011187 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:52:29.011299 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:52:29.011398 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:52:29.011513 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:52:29.011618 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 13 00:52:29.011715 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 13 00:52:29.011825 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:52:29.011927 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 13 00:52:29.011941 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:52:29.011951 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:52:29.011960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:52:29.011969 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:52:29.011987 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:52:29.011997 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:52:29.012006 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:52:29.012018 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:52:29.012027 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:52:29.012036 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:52:29.012045 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:52:29.012055 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:52:29.012064 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:52:29.012073 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:52:29.012082 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:52:29.012104 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:52:29.012126 kernel: iommu: Default domain type: Translated
Sep 13 00:52:29.012135 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:52:29.012237 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:52:29.012332 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:52:29.012424 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:52:29.012437 kernel: vgaarb: loaded
Sep 13 00:52:29.012446 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:52:29.012455 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:52:29.012478 kernel: PTP clock support registered
Sep 13 00:52:29.012487 kernel: Registered efivars operations
Sep 13 00:52:29.012496 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:52:29.012505 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:52:29.012514 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:52:29.012523 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 13 00:52:29.012532 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Sep 13 00:52:29.012541 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Sep 13 00:52:29.012551 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 13 00:52:29.012561 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 13 00:52:29.012571 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:52:29.012580 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:52:29.012589 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:52:29.012599 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:52:29.012608 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:52:29.012608 kernel: pnp: PnP ACPI init
Sep 13 00:52:29.012726 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:52:29.012743 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:52:29.012753 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:52:29.012762 kernel: NET: Registered PF_INET protocol family
Sep 13 00:52:29.012771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:52:29.012781 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:52:29.012790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:52:29.012800 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:52:29.012809 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:52:29.012818 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:52:29.012829 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:52:29.012839 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:52:29.012848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:52:29.012857 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:52:29.012956 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 13 00:52:29.013065 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 13 00:52:29.013186 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:52:29.013271 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:52:29.013360 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:52:29.013448 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:52:29.013530 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:52:29.013612 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 13 00:52:29.013625 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:52:29.013634 kernel: Initialise system trusted keyrings
Sep 13 00:52:29.013644 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:52:29.013653 kernel: Key type asymmetric registered
Sep 13 00:52:29.013662 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:52:29.013674 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:52:29.013684 kernel: io scheduler mq-deadline registered
Sep 13 00:52:29.013704 kernel: io scheduler kyber registered
Sep 13 00:52:29.013715 kernel: io scheduler bfq registered
Sep 13 00:52:29.013725 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:52:29.013735 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:52:29.013745 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:52:29.013754 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:52:29.013763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:52:29.013775 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:52:29.013785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:52:29.013794 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:52:29.013804 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:52:29.013942 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:52:29.013958 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:52:29.014054 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:52:29.014157 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:52:28 UTC (1757724748)
Sep 13 00:52:29.014252 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:52:29.014266 kernel: efifb: probing for efifb
Sep 13 00:52:29.014276 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 13 00:52:29.014286 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 13 00:52:29.014295 kernel: efifb: scrolling: redraw
Sep 13 00:52:29.014305 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:52:29.014315 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:52:29.014324 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:52:29.014334 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:52:29.014346 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:52:29.014355 kernel: Segment Routing with IPv6
Sep 13 00:52:29.014365 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:52:29.014376 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:52:29.014386 kernel: Key type dns_resolver registered
Sep 13 00:52:29.014395 kernel: IPI shorthand broadcast: enabled
Sep 13 00:52:29.014407 kernel: sched_clock: Marking stable (544030322, 150077861)->(756608010, -62499827)
Sep 13 00:52:29.014418 kernel: registered taskstats version 1
Sep 13 00:52:29.014427 kernel: Loading compiled-in X.509 certificates
Sep 13 00:52:29.014437 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:52:29.014447 kernel: Key type .fscrypt registered
Sep 13 00:52:29.014456 kernel: Key type fscrypt-provisioning registered
Sep 13 00:52:29.014466 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:52:29.014475 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:52:29.014487 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:52:29.014496 kernel: ima: No architecture policies found Sep 13 00:52:29.014506 kernel: clk: Disabling unused clocks Sep 13 00:52:29.014515 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:52:29.014524 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:52:29.014534 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:52:29.014548 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:52:29.014557 kernel: Run /init as init process Sep 13 00:52:29.014567 kernel: with arguments: Sep 13 00:52:29.014578 kernel: /init Sep 13 00:52:29.014588 kernel: with environment: Sep 13 00:52:29.014597 kernel: HOME=/ Sep 13 00:52:29.014606 kernel: TERM=linux Sep 13 00:52:29.014616 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:52:29.014628 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:52:29.014640 systemd[1]: Detected virtualization kvm. Sep 13 00:52:29.014650 systemd[1]: Detected architecture x86-64. Sep 13 00:52:29.014662 systemd[1]: Running in initrd. Sep 13 00:52:29.014672 systemd[1]: No hostname configured, using default hostname. Sep 13 00:52:29.014681 systemd[1]: Hostname set to . Sep 13 00:52:29.014693 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:52:29.014703 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:52:29.014713 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:52:29.014723 systemd[1]: Reached target cryptsetup.target. Sep 13 00:52:29.014740 systemd[1]: Reached target paths.target. Sep 13 00:52:29.014753 systemd[1]: Reached target slices.target. 
Sep 13 00:52:29.014766 systemd[1]: Reached target swap.target. Sep 13 00:52:29.014776 systemd[1]: Reached target timers.target. Sep 13 00:52:29.014786 systemd[1]: Listening on iscsid.socket. Sep 13 00:52:29.014796 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:52:29.014807 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:52:29.014817 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:52:29.014827 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:52:29.014839 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:52:29.014849 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:52:29.014859 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:52:29.014869 systemd[1]: Reached target sockets.target. Sep 13 00:52:29.014879 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:52:29.014889 systemd[1]: Finished network-cleanup.service. Sep 13 00:52:29.014900 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:52:29.014910 systemd[1]: Starting systemd-journald.service... Sep 13 00:52:29.014920 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:52:29.014932 systemd[1]: Starting systemd-resolved.service... Sep 13 00:52:29.014942 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:52:29.014952 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:52:29.014962 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:52:29.014983 kernel: audit: type=1130 audit(1757724749.007:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.014993 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:52:29.015003 kernel: audit: type=1130 audit(1757724749.011:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:29.015016 systemd-journald[198]: Journal started Sep 13 00:52:29.015067 systemd-journald[198]: Runtime Journal (/run/log/journal/cb1ceeba5e064850bc5ce2c559912225) is 6.0M, max 48.4M, 42.4M free. Sep 13 00:52:29.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.006272 systemd-modules-load[199]: Inserted module 'overlay' Sep 13 00:52:29.017266 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:52:29.020349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:52:29.022491 systemd[1]: Started systemd-journald.service. Sep 13 00:52:29.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.025387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:52:29.026840 kernel: audit: type=1130 audit(1757724749.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.033121 kernel: audit: type=1130 audit(1757724749.025:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:29.037199 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:52:29.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.038953 systemd-resolved[200]: Positive Trust Anchors: Sep 13 00:52:29.043182 kernel: audit: type=1130 audit(1757724749.038:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.038962 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:52:29.039009 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:52:29.041690 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:52:29.042464 systemd-resolved[200]: Defaulting to hostname 'linux'. Sep 13 00:52:29.056539 kernel: audit: type=1130 audit(1757724749.045:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.045624 systemd[1]: Started systemd-resolved.service. Sep 13 00:52:29.046129 systemd[1]: Reached target nss-lookup.target. 
Sep 13 00:52:29.066045 dracut-cmdline[216]: dracut-dracut-053 Sep 13 00:52:29.068956 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:52:29.138962 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:52:29.144322 systemd-modules-load[199]: Inserted module 'br_netfilter' Sep 13 00:52:29.145441 kernel: Bridge firewalling registered Sep 13 00:52:29.163130 kernel: SCSI subsystem initialized Sep 13 00:52:29.177224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:52:29.177304 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:52:29.177336 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:52:29.181123 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:52:29.182124 systemd-modules-load[199]: Inserted module 'dm_multipath' Sep 13 00:52:29.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.183454 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:52:29.190494 kernel: audit: type=1130 audit(1757724749.184:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.185202 systemd[1]: Starting systemd-sysctl.service... 
Sep 13 00:52:29.196558 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:52:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.201119 kernel: audit: type=1130 audit(1757724749.198:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.203117 kernel: iscsi: registered transport (tcp) Sep 13 00:52:29.224358 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:52:29.224397 kernel: QLogic iSCSI HBA Driver Sep 13 00:52:29.251324 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:52:29.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.254195 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:52:29.257738 kernel: audit: type=1130 audit(1757724749.253:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:29.302131 kernel: raid6: avx2x4 gen() 30685 MB/s Sep 13 00:52:29.319116 kernel: raid6: avx2x4 xor() 8239 MB/s Sep 13 00:52:29.336141 kernel: raid6: avx2x2 gen() 32469 MB/s Sep 13 00:52:29.353118 kernel: raid6: avx2x2 xor() 19114 MB/s Sep 13 00:52:29.370144 kernel: raid6: avx2x1 gen() 26311 MB/s Sep 13 00:52:29.387133 kernel: raid6: avx2x1 xor() 15240 MB/s Sep 13 00:52:29.404114 kernel: raid6: sse2x4 gen() 14737 MB/s Sep 13 00:52:29.421121 kernel: raid6: sse2x4 xor() 7730 MB/s Sep 13 00:52:29.438141 kernel: raid6: sse2x2 gen() 16161 MB/s Sep 13 00:52:29.455116 kernel: raid6: sse2x2 xor() 9790 MB/s Sep 13 00:52:29.472128 kernel: raid6: sse2x1 gen() 4259 MB/s Sep 13 00:52:29.489499 kernel: raid6: sse2x1 xor() 7113 MB/s Sep 13 00:52:29.489544 kernel: raid6: using algorithm avx2x2 gen() 32469 MB/s Sep 13 00:52:29.489580 kernel: raid6: .... xor() 19114 MB/s, rmw enabled Sep 13 00:52:29.490218 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:52:29.503137 kernel: xor: automatically using best checksumming function avx Sep 13 00:52:29.598141 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:52:29.606851 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:52:29.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.608000 audit: BPF prog-id=7 op=LOAD Sep 13 00:52:29.608000 audit: BPF prog-id=8 op=LOAD Sep 13 00:52:29.608871 systemd[1]: Starting systemd-udevd.service... Sep 13 00:52:29.621428 systemd-udevd[399]: Using default interface naming scheme 'v252'. Sep 13 00:52:29.625344 systemd[1]: Started systemd-udevd.service. Sep 13 00:52:29.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:29.627856 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:52:29.637412 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:52:29.665340 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:52:29.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.667078 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:52:29.704029 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:52:29.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:29.737676 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:52:29.747372 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:52:29.747389 kernel: GPT:9289727 != 19775487 Sep 13 00:52:29.747401 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:52:29.747413 kernel: GPT:9289727 != 19775487 Sep 13 00:52:29.747424 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:52:29.747440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:52:29.747452 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:52:29.759112 kernel: libata version 3.00 loaded. Sep 13 00:52:29.772027 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:52:29.772081 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (444) Sep 13 00:52:29.776666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Sep 13 00:52:29.780185 kernel: AES CTR mode by8 optimization enabled Sep 13 00:52:29.780214 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:52:29.799341 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:52:29.799366 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:52:29.799492 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:52:29.799609 kernel: scsi host0: ahci Sep 13 00:52:29.799702 kernel: scsi host1: ahci Sep 13 00:52:29.799882 kernel: scsi host2: ahci Sep 13 00:52:29.799975 kernel: scsi host3: ahci Sep 13 00:52:29.800060 kernel: scsi host4: ahci Sep 13 00:52:29.800185 kernel: scsi host5: ahci Sep 13 00:52:29.800370 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 13 00:52:29.800384 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 13 00:52:29.800396 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 13 00:52:29.800410 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 13 00:52:29.800421 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 13 00:52:29.800432 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 13 00:52:29.780138 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:52:29.789227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:52:29.795432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:52:29.807956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:52:29.810559 systemd[1]: Starting disk-uuid.service... Sep 13 00:52:29.816897 disk-uuid[529]: Primary Header is updated. Sep 13 00:52:29.816897 disk-uuid[529]: Secondary Entries is updated. Sep 13 00:52:29.816897 disk-uuid[529]: Secondary Header is updated. 
Sep 13 00:52:29.820393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:52:29.823123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:52:29.826122 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:52:30.112403 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:52:30.112491 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:52:30.112502 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:52:30.112523 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:52:30.112532 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:52:30.114119 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:52:30.114141 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:52:30.115413 kernel: ata3.00: applying bridge limits Sep 13 00:52:30.116113 kernel: ata3.00: configured for UDMA/100 Sep 13 00:52:30.118120 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:52:30.147118 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:52:30.164925 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:52:30.164950 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:52:30.826790 disk-uuid[530]: The operation has completed successfully. Sep 13 00:52:30.827910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:52:30.848700 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:52:30.848775 systemd[1]: Finished disk-uuid.service. Sep 13 00:52:30.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:30.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:30.855734 systemd[1]: Starting verity-setup.service... 
Sep 13 00:52:30.871128 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:52:30.904661 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:52:30.907040 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:52:30.910816 systemd[1]: Finished verity-setup.service. Sep 13 00:52:30.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.040119 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:52:31.040224 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:52:31.041636 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:52:31.043562 systemd[1]: Starting ignition-setup.service... Sep 13 00:52:31.045485 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:52:31.053483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:52:31.053511 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:52:31.053521 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:52:31.061235 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:52:31.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.070475 systemd[1]: Finished ignition-setup.service. Sep 13 00:52:31.072539 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:52:31.193689 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:52:31.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:31.196000 audit: BPF prog-id=9 op=LOAD Sep 13 00:52:31.197199 systemd[1]: Starting systemd-networkd.service... Sep 13 00:52:31.223828 systemd-networkd[721]: lo: Link UP Sep 13 00:52:31.223837 systemd-networkd[721]: lo: Gained carrier Sep 13 00:52:31.224255 systemd-networkd[721]: Enumeration completed Sep 13 00:52:31.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.224443 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:52:31.225738 systemd[1]: Started systemd-networkd.service. Sep 13 00:52:31.227342 systemd[1]: Reached target network.target. Sep 13 00:52:31.227822 systemd-networkd[721]: eth0: Link UP Sep 13 00:52:31.227825 systemd-networkd[721]: eth0: Gained carrier Sep 13 00:52:31.230081 systemd[1]: Starting iscsiuio.service... Sep 13 00:52:31.287123 ignition[659]: Ignition 2.14.0 Sep 13 00:52:31.287965 ignition[659]: Stage: fetch-offline Sep 13 00:52:31.288047 ignition[659]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:52:31.288059 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:52:31.288784 ignition[659]: parsed url from cmdline: "" Sep 13 00:52:31.288789 ignition[659]: no config URL provided Sep 13 00:52:31.288796 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:52:31.288818 ignition[659]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:52:31.289412 ignition[659]: op(1): [started] loading QEMU firmware config module Sep 13 00:52:31.289419 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:52:31.298352 ignition[659]: op(1): [finished] loading QEMU firmware config module Sep 13 00:52:31.298848 ignition[659]: QEMU firmware config was not found. Ignoring... 
Sep 13 00:52:31.340098 ignition[659]: parsing config with SHA512: 4b49976d0579977964b75f01b7acc2187b793a80ddcbc1e84753e897115fa0b447a5dc408effb04cb8bf42dd32516d42f54c1f771e2f08ae064d0d4a99e983dc Sep 13 00:52:31.364792 unknown[659]: fetched base config from "system" Sep 13 00:52:31.364807 unknown[659]: fetched user config from "qemu" Sep 13 00:52:31.365543 ignition[659]: fetch-offline: fetch-offline passed Sep 13 00:52:31.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.366865 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:52:31.365617 ignition[659]: Ignition finished successfully Sep 13 00:52:31.367692 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:52:31.368736 systemd[1]: Starting ignition-kargs.service... Sep 13 00:52:31.430996 systemd[1]: Started iscsiuio.service. Sep 13 00:52:31.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.432736 systemd[1]: Starting iscsid.service... Sep 13 00:52:31.435229 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:52:31.436982 iscsid[734]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:52:31.436982 iscsid[734]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:52:31.436982 iscsid[734]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:52:31.436982 iscsid[734]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:52:31.436982 iscsid[734]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:52:31.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.439622 systemd[1]: Started iscsid.service. Sep 13 00:52:31.450870 iscsid[734]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:52:31.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.442371 ignition[727]: Ignition 2.14.0 Sep 13 00:52:31.448287 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:52:31.442379 ignition[727]: Stage: kargs Sep 13 00:52:31.449603 systemd[1]: Finished ignition-kargs.service. Sep 13 00:52:31.442604 ignition[727]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:52:31.451691 systemd[1]: Starting ignition-disks.service... Sep 13 00:52:31.442616 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:52:31.444225 ignition[727]: kargs: kargs passed Sep 13 00:52:31.444267 ignition[727]: Ignition finished successfully Sep 13 00:52:31.461008 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:52:31.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 00:52:31.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.460101 ignition[736]: Ignition 2.14.0 Sep 13 00:52:31.462789 systemd[1]: Finished ignition-disks.service. Sep 13 00:52:31.460108 ignition[736]: Stage: disks Sep 13 00:52:31.464582 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:52:31.460222 ignition[736]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:52:31.465493 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:52:31.460230 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:52:31.466813 systemd[1]: Reached target local-fs.target. Sep 13 00:52:31.461146 ignition[736]: disks: disks passed Sep 13 00:52:31.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.467589 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:52:31.461183 ignition[736]: Ignition finished successfully Sep 13 00:52:31.467980 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:52:31.468309 systemd[1]: Reached target remote-fs.target. Sep 13 00:52:31.468472 systemd[1]: Reached target sysinit.target. Sep 13 00:52:31.468632 systemd[1]: Reached target basic.target. Sep 13 00:52:31.469549 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:52:31.477290 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:52:31.478718 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:52:31.490915 systemd-fsck[756]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:52:31.496687 systemd[1]: Finished systemd-fsck-root.service. 
Sep 13 00:52:31.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.500004 systemd[1]: Mounting sysroot.mount... Sep 13 00:52:31.509116 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:52:31.509328 systemd[1]: Mounted sysroot.mount. Sep 13 00:52:31.510769 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:52:31.513184 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:52:31.514744 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:52:31.514778 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:52:31.514797 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:52:31.519938 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:52:31.522069 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:52:31.526653 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:52:31.531444 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:52:31.535471 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:52:31.539192 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:52:31.566515 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:52:31.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.569251 systemd[1]: Starting ignition-mount.service... Sep 13 00:52:31.571926 systemd[1]: Starting sysroot-boot.service... Sep 13 00:52:31.574235 bash[807]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 13 00:52:31.623621 systemd[1]: Finished sysroot-boot.service. Sep 13 00:52:31.624995 ignition[808]: INFO : Ignition 2.14.0 Sep 13 00:52:31.624995 ignition[808]: INFO : Stage: mount Sep 13 00:52:31.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.627280 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:52:31.627280 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:52:31.627280 ignition[808]: INFO : mount: mount passed Sep 13 00:52:31.627280 ignition[808]: INFO : Ignition finished successfully Sep 13 00:52:31.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:31.627271 systemd[1]: Finished ignition-mount.service. Sep 13 00:52:31.916673 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:52:31.924986 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 13 00:52:31.925024 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:52:31.925036 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:52:31.926820 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:52:31.931820 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:52:31.933125 systemd[1]: Starting ignition-files.service... 
Sep 13 00:52:31.950407 ignition[836]: INFO : Ignition 2.14.0
Sep 13 00:52:31.950407 ignition[836]: INFO : Stage: files
Sep 13 00:52:31.952145 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:52:31.952145 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:52:31.952145 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:52:31.956382 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:52:31.956382 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:52:31.956382 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:52:31.956382 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:52:31.956382 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:52:31.955938 unknown[836]: wrote ssh authorized keys file for user: core
Sep 13 00:52:31.965120 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:52:31.965120 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:52:32.013811 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:52:32.300469 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:52:32.302633 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:52:32.304518 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:52:32.403913 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:52:32.579150 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:52:32.579150 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:32.621060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:52:32.815303 systemd-networkd[721]: eth0: Gained IPv6LL
Sep 13 00:52:32.844401 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:52:33.542181 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:33.542181 ignition[836]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:52:33.546226 ignition[836]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:52:33.579340 ignition[836]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:52:33.581844 ignition[836]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:52:33.581844 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:52:33.581844 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:52:33.581844 ignition[836]: INFO : files: files passed
Sep 13 00:52:33.581844 ignition[836]: INFO : Ignition finished successfully
Sep 13 00:52:33.604559 kernel: kauditd_printk_skb: 24 callbacks suppressed
Sep 13 00:52:33.604584 kernel: audit: type=1130 audit(1757724753.581:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.604596 kernel: audit: type=1130 audit(1757724753.592:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.604607 kernel: audit: type=1130 audit(1757724753.597:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.604617 kernel: audit: type=1131 audit(1757724753.597:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.580832 systemd[1]: Finished ignition-files.service.
Sep 13 00:52:33.582794 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:52:33.587977 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:52:33.609229 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 13 00:52:33.588607 systemd[1]: Starting ignition-quench.service...
Sep 13 00:52:33.611585 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:52:33.590534 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:52:33.593150 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:52:33.593205 systemd[1]: Finished ignition-quench.service.
Sep 13 00:52:33.597476 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:52:33.623706 kernel: audit: type=1130 audit(1757724753.616:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.623723 kernel: audit: type=1131 audit(1757724753.616:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.605196 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:52:33.615755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:52:33.615825 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:52:33.616770 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:52:33.623712 systemd[1]: Reached target initrd.target.
Sep 13 00:52:33.624477 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:52:33.625064 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:52:33.635555 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:52:33.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.638142 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:52:33.641390 kernel: audit: type=1130 audit(1757724753.637:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.646862 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:52:33.647738 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:52:33.649285 systemd[1]: Stopped target timers.target.
Sep 13 00:52:33.650818 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:52:33.711195 kernel: audit: type=1131 audit(1757724753.652:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.650918 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:52:33.652390 systemd[1]: Stopped target initrd.target.
Sep 13 00:52:33.711276 systemd[1]: Stopped target basic.target.
Sep 13 00:52:33.712767 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:52:33.714304 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:52:33.715794 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:52:33.717479 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:52:33.719036 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:52:33.720656 systemd[1]: Stopped target sysinit.target.
Sep 13 00:52:33.722127 systemd[1]: Stopped target local-fs.target.
Sep 13 00:52:33.723635 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:52:33.725150 systemd[1]: Stopped target swap.target.
Sep 13 00:52:33.732112 kernel: audit: type=1131 audit(1757724753.727:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.726503 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:52:33.726610 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:52:33.755056 kernel: audit: type=1131 audit(1757724753.733:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.728146 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:52:33.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.732172 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:52:33.732255 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:52:33.733922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:52:33.734007 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:52:33.755340 systemd[1]: Stopped target paths.target.
Sep 13 00:52:33.756688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:52:33.761208 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:52:33.763335 systemd[1]: Stopped target slices.target.
Sep 13 00:52:33.764971 systemd[1]: Stopped target sockets.target.
Sep 13 00:52:33.766416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:52:33.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.766533 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:52:33.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.768042 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:52:33.768145 systemd[1]: Stopped ignition-files.service.
Sep 13 00:52:33.773220 iscsid[734]: iscsid shutting down.
Sep 13 00:52:33.770860 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:52:33.773969 systemd[1]: Stopping iscsid.service...
Sep 13 00:52:33.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.774624 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:52:33.778607 ignition[876]: INFO : Ignition 2.14.0
Sep 13 00:52:33.778607 ignition[876]: INFO : Stage: umount
Sep 13 00:52:33.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.774750 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:52:33.782501 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:52:33.782501 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:52:33.782501 ignition[876]: INFO : umount: umount passed
Sep 13 00:52:33.782501 ignition[876]: INFO : Ignition finished successfully
Sep 13 00:52:33.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.777301 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:52:33.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.778609 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:52:33.778767 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:52:33.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.780409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:52:33.780523 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:52:33.783940 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:52:33.784071 systemd[1]: Stopped iscsid.service.
Sep 13 00:52:33.786813 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:52:33.786905 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:52:33.788500 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:52:33.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.788579 systemd[1]: Closed iscsid.socket.
Sep 13 00:52:33.789961 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:52:33.790011 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:52:33.790516 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:52:33.790549 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:52:33.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.790962 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:52:33.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.791028 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:52:33.791426 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:52:33.791709 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:52:33.791852 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:52:33.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.794063 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:52:33.795766 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:52:33.831000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:52:33.795888 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:52:33.797241 systemd[1]: Stopped target network.target.
Sep 13 00:52:33.798923 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:52:33.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.798951 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:52:33.799845 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:52:33.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.801625 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:52:33.804149 systemd-networkd[721]: eth0: DHCPv6 lease lost
Sep 13 00:52:33.852000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:52:33.806066 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:52:33.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.806166 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:52:33.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.810123 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:52:33.810149 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:52:33.812980 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:52:33.813673 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:52:33.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.813712 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:52:33.815788 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:52:33.815828 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:52:33.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.817960 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:52:33.818026 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:52:33.819408 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:52:33.822389 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:52:33.822954 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:52:33.823069 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:52:33.844168 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:52:33.844896 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:52:33.847889 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:52:33.848016 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:52:33.849383 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:52:33.849419 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:52:33.850845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:52:33.850877 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:52:33.852640 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:52:33.852688 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:52:33.854378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:52:33.854427 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:52:33.856211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:52:33.856250 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:52:33.859089 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:52:33.860870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:52:33.860927 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:52:33.865146 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:52:33.865228 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:52:33.921496 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:52:33.921612 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:52:33.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.923694 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:52:33.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:33.925386 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:52:33.925422 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:52:33.927124 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:52:33.942250 systemd[1]: Switching root.
Sep 13 00:52:33.961759 systemd-journald[198]: Journal stopped
Sep 13 00:52:37.027555 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:52:37.027610 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:52:37.027622 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:52:37.027632 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:52:37.027644 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:52:37.027653 kernel: SELinux: policy capability open_perms=1
Sep 13 00:52:37.027663 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:52:37.027672 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:52:37.027683 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:52:37.027693 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:52:37.027702 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:52:37.027711 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:52:37.027722 systemd[1]: Successfully loaded SELinux policy in 42.157ms.
Sep 13 00:52:37.027755 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.864ms.
Sep 13 00:52:37.027767 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:52:37.027777 systemd[1]: Detected virtualization kvm.
Sep 13 00:52:37.027787 systemd[1]: Detected architecture x86-64.
Sep 13 00:52:37.027797 systemd[1]: Detected first boot.
Sep 13 00:52:37.027811 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:52:37.027822 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:52:37.027831 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:52:37.027842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:52:37.027856 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:52:37.027868 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:52:37.027879 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:52:37.027890 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:52:37.027900 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:52:37.027910 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:52:37.027920 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:52:37.027933 systemd[1]: Created slice system-getty.slice.
Sep 13 00:52:37.027943 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:52:37.027953 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:52:37.027963 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:52:37.027974 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:52:37.027984 systemd[1]: Created slice user.slice.
Sep 13 00:52:37.027994 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:52:37.028004 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:52:37.028013 systemd[1]: Set up automount boot.automount.
Sep 13 00:52:37.028024 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:52:37.028033 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:52:37.028043 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:52:37.028055 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:52:37.028066 systemd[1]: Reached target integritysetup.target.
Sep 13 00:52:37.028076 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:52:37.028086 systemd[1]: Reached target remote-fs.target. Sep 13 00:52:37.028107 systemd[1]: Reached target slices.target. Sep 13 00:52:37.028117 systemd[1]: Reached target swap.target. Sep 13 00:52:37.028127 systemd[1]: Reached target torcx.target. Sep 13 00:52:37.028138 systemd[1]: Reached target veritysetup.target. Sep 13 00:52:37.028148 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:52:37.028158 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:52:37.028169 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:52:37.028179 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:52:37.028189 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:52:37.028200 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:52:37.028210 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:52:37.028219 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:52:37.028229 systemd[1]: Mounting media.mount... Sep 13 00:52:37.028239 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:37.028249 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:52:37.028259 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:52:37.028269 systemd[1]: Mounting tmp.mount... Sep 13 00:52:37.028279 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:52:37.028289 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:37.028299 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:52:37.028309 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:52:37.028319 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:37.028330 systemd[1]: Starting modprobe@drm.service... Sep 13 00:52:37.028339 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:37.028350 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:52:37.028360 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:52:37.028372 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:52:37.028382 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:52:37.028391 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:52:37.028401 kernel: loop: module loaded Sep 13 00:52:37.028411 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:52:37.028421 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:52:37.028431 kernel: fuse: init (API version 7.34) Sep 13 00:52:37.028446 systemd[1]: Stopped systemd-journald.service. Sep 13 00:52:37.028457 systemd[1]: Starting systemd-journald.service... Sep 13 00:52:37.028467 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:52:37.028477 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:52:37.028487 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:52:37.028498 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:52:37.028508 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:52:37.028519 systemd[1]: Stopped verity-setup.service. Sep 13 00:52:37.028531 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:37.028545 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:52:37.028555 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:52:37.028565 systemd[1]: Mounted media.mount. Sep 13 00:52:37.028578 systemd-journald[987]: Journal started Sep 13 00:52:37.028615 systemd-journald[987]: Runtime Journal (/run/log/journal/cb1ceeba5e064850bc5ce2c559912225) is 6.0M, max 48.4M, 42.4M free. 
Sep 13 00:52:34.026000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:52:34.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:52:34.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:52:34.193000 audit: BPF prog-id=10 op=LOAD Sep 13 00:52:34.193000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:52:34.193000 audit: BPF prog-id=11 op=LOAD Sep 13 00:52:34.193000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:52:34.226000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:52:34.226000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e4 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:34.226000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:52:34.228000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:52:34.228000 audit[909]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879c9 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:34.228000 audit: CWD cwd="/" Sep 13 00:52:34.228000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:34.228000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:34.228000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:52:36.902000 audit: BPF prog-id=12 op=LOAD Sep 13 00:52:36.902000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:52:36.902000 audit: BPF prog-id=13 op=LOAD Sep 13 00:52:36.902000 audit: BPF prog-id=14 op=LOAD Sep 13 00:52:36.902000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:52:36.902000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:52:36.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:36.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:36.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:36.913000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:52:36.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.005000 audit: BPF prog-id=15 op=LOAD Sep 13 00:52:37.005000 audit: BPF prog-id=16 op=LOAD Sep 13 00:52:37.005000 audit: BPF prog-id=17 op=LOAD Sep 13 00:52:37.005000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:52:37.005000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:52:37.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:37.026000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:52:37.026000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd3a274c40 a2=4000 a3=7ffd3a274cdc items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:37.026000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:52:36.901070 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:52:34.225035 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:52:36.901081 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:52:34.225311 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:52:36.903660 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 13 00:52:34.225330 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:52:34.225369 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:52:34.225381 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:52:34.225417 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:52:34.225430 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:52:34.225654 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:52:34.225695 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:52:34.225708 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:52:34.226377 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:52:34.226418 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:52:34.226441 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:52:34.226458 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:52:34.226478 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:52:34.226496 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:52:36.648606 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:52:36.648903 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:52:36.649008 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 
00:52:36.649183 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:52:36.649232 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:52:36.649294 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-13T00:52:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:52:37.031108 systemd[1]: Started systemd-journald.service. Sep 13 00:52:37.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.032083 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:52:37.032993 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:52:37.033830 systemd[1]: Mounted tmp.mount. Sep 13 00:52:37.034803 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:52:37.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.035835 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 00:52:37.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.036816 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:52:37.036969 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:52:37.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.037962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:37.038137 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:37.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.039116 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:52:37.039399 systemd[1]: Finished modprobe@drm.service. Sep 13 00:52:37.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:37.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.040441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:37.040568 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:37.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.041592 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:52:37.041720 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:52:37.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.042664 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:37.042792 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:37.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:37.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.043894 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:52:37.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.045073 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:52:37.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.047073 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:52:37.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.048419 systemd[1]: Reached target network-pre.target. Sep 13 00:52:37.050360 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:52:37.052412 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:52:37.053207 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:52:37.054790 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:52:37.056793 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:52:37.057661 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 00:52:37.062942 systemd-journald[987]: Time spent on flushing to /var/log/journal/cb1ceeba5e064850bc5ce2c559912225 is 13.582ms for 1157 entries. Sep 13 00:52:37.062942 systemd-journald[987]: System Journal (/var/log/journal/cb1ceeba5e064850bc5ce2c559912225) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:52:37.089395 systemd-journald[987]: Received client request to flush runtime journal. Sep 13 00:52:37.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.058814 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:52:37.060351 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:37.061490 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:52:37.065409 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:52:37.069165 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:52:37.070301 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:52:37.074364 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:52:37.075394 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:52:37.078862 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:52:37.084275 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:52:37.086290 systemd[1]: Starting systemd-udev-settle.service... 
Sep 13 00:52:37.090570 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:52:37.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.093604 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:52:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.094661 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:52:37.501468 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:52:37.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.503000 audit: BPF prog-id=18 op=LOAD Sep 13 00:52:37.503000 audit: BPF prog-id=19 op=LOAD Sep 13 00:52:37.503000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:52:37.503000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:52:37.504429 systemd[1]: Starting systemd-udevd.service... Sep 13 00:52:37.519872 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Sep 13 00:52:37.532606 systemd[1]: Started systemd-udevd.service. Sep 13 00:52:37.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.534000 audit: BPF prog-id=20 op=LOAD Sep 13 00:52:37.534799 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:52:37.538000 audit: BPF prog-id=21 op=LOAD Sep 13 00:52:37.538000 audit: BPF prog-id=22 op=LOAD Sep 13 00:52:37.538000 audit: BPF prog-id=23 op=LOAD Sep 13 00:52:37.539482 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:52:37.564533 systemd[1]: Started systemd-userdbd.service. Sep 13 00:52:37.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.572573 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:52:37.578892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:52:37.603589 systemd-networkd[1023]: lo: Link UP Sep 13 00:52:37.603599 systemd-networkd[1023]: lo: Gained carrier Sep 13 00:52:37.603961 systemd-networkd[1023]: Enumeration completed Sep 13 00:52:37.604046 systemd[1]: Started systemd-networkd.service. Sep 13 00:52:37.604617 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:52:37.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:37.608113 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:52:37.608336 systemd-networkd[1023]: eth0: Link UP Sep 13 00:52:37.608344 systemd-networkd[1023]: eth0: Gained carrier Sep 13 00:52:37.615110 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:52:37.620223 systemd-networkd[1023]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:52:37.626000 audit[1032]: AVC avc: denied { confidentiality } for pid=1032 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:52:37.626000 audit[1032]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fd07657f40 a1=338ec a2=7fbadecabbc5 a3=5 items=110 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:37.626000 audit: CWD cwd="/" Sep 13 00:52:37.626000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=1 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=2 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=3 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=4 name=(null) inode=14563 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=5 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=6 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=7 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=8 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=9 name=(null) inode=14567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=10 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=11 name=(null) inode=14568 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=12 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=13 name=(null) inode=14569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=14 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=15 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=16 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=17 name=(null) inode=14571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=18 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=19 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=20 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=21 name=(null) inode=14573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=22 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=23 name=(null) inode=14574 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=24 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=25 name=(null) inode=14575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=26 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=27 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=28 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=29 name=(null) inode=14577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=30 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=31 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=32 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=33 name=(null) inode=14579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=34 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=35 name=(null) inode=14580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=36 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=37 name=(null) inode=14581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=38 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=39 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=40 name=(null) inode=14578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:52:37.626000 audit: PATH item=41 name=(null) inode=14583 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=42 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=43 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=44 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=45 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=46 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=47 name=(null) inode=14586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=48 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=49 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=50 
name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=51 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=52 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=53 name=(null) inode=14589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=55 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=56 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=57 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=58 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=59 name=(null) inode=14592 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=60 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=61 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=62 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=63 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=64 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=65 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=66 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=67 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=68 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=69 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=70 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=71 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=72 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=73 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=74 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=75 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=76 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=77 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=78 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=79 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=80 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=81 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=82 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=83 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=84 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=85 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=86 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=87 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=88 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=89 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=90 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=91 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=92 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=93 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=94 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=95 name=(null) inode=14610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:52:37.626000 audit: PATH item=96 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=97 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=98 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=99 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=100 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=101 name=(null) inode=14613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=102 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=103 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=104 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=105 
name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=106 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=107 name=(null) inode=14616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PATH item=109 name=(null) inode=14617 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:52:37.626000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:52:37.641112 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:52:37.660124 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:52:37.664647 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 00:52:37.671227 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:52:37.671344 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:52:37.671456 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:52:37.707118 kernel: kvm: Nested Virtualization enabled Sep 13 00:52:37.707205 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:52:37.707238 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:52:37.707251 kernel: SVM: Virtual GIF supported Sep 13 00:52:37.726190 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:52:37.753430 systemd[1]: Finished 
systemd-udev-settle.service. Sep 13 00:52:37.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.755486 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:52:37.763007 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:52:37.789419 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:52:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.790607 systemd[1]: Reached target cryptsetup.target. Sep 13 00:52:37.792587 systemd[1]: Starting lvm2-activation.service... Sep 13 00:52:37.796181 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:52:37.820772 systemd[1]: Finished lvm2-activation.service. Sep 13 00:52:37.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.822584 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:52:37.823534 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:52:37.823558 systemd[1]: Reached target local-fs.target. Sep 13 00:52:37.824431 systemd[1]: Reached target machines.target. Sep 13 00:52:37.826201 systemd[1]: Starting ldconfig.service... Sep 13 00:52:37.827267 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:52:37.827310 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:37.828147 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:52:37.829999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:52:37.832006 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:52:37.834131 systemd[1]: Starting systemd-sysext.service... Sep 13 00:52:37.835548 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) Sep 13 00:52:37.837065 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:52:37.842659 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:52:37.847754 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:52:37.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.850007 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:52:37.850238 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:52:37.862124 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:52:37.876465 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31) Sep 13 00:52:37.876465 systemd-fsck[1063]: /dev/vda1: 791 files, 120781/258078 clusters Sep 13 00:52:37.880826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:52:37.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:37.883890 systemd[1]: Mounting boot.mount... 
Sep 13 00:52:37.895340 systemd[1]: Mounted boot.mount. Sep 13 00:52:37.915214 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:52:37.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.816123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:52:38.820622 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:52:38.857119 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:52:38.914073 (sd-sysext)[1069]: Using extensions 'kubernetes'. Sep 13 00:52:38.914418 (sd-sysext)[1069]: Merged extensions into '/usr'. Sep 13 00:52:38.927788 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:38.928967 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:52:38.929906 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:38.931114 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:38.932869 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:38.934539 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:38.935334 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:38.935485 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:38.935597 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:38.937847 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:52:38.938810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:52:38.938915 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:38.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.940052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:38.940210 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:38.940591 kernel: kauditd_printk_skb: 225 callbacks suppressed Sep 13 00:52:38.940627 kernel: audit: type=1130 audit(1757724758.939:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.946476 kernel: audit: type=1131 audit(1757724758.939:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.947613 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:38.947717 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:38.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:38.951131 kernel: audit: type=1130 audit(1757724758.947:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.951174 kernel: audit: type=1131 audit(1757724758.947:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.954504 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:38.954593 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:38.955363 systemd[1]: Finished systemd-sysext.service. Sep 13 00:52:38.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.958124 kernel: audit: type=1130 audit(1757724758.954:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.958160 kernel: audit: type=1131 audit(1757724758.954:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:38.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.961673 systemd[1]: Starting ensure-sysext.service... Sep 13 00:52:38.964130 kernel: audit: type=1130 audit(1757724758.960:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:38.965142 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:52:38.969316 systemd[1]: Reloading. Sep 13 00:52:38.979049 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:52:38.981446 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:52:38.984291 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:52:39.020275 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-09-13T00:52:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:52:39.020301 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-09-13T00:52:39Z" level=info msg="torcx already run" Sep 13 00:52:39.146495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:52:39.146514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:52:39.163786 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:52:39.215000 audit: BPF prog-id=24 op=LOAD Sep 13 00:52:39.215000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:52:39.233749 kernel: audit: type=1334 audit(1757724759.215:155): prog-id=24 op=LOAD Sep 13 00:52:39.233792 kernel: audit: type=1334 audit(1757724759.215:156): prog-id=21 op=UNLOAD Sep 13 00:52:39.233813 kernel: audit: type=1334 audit(1757724759.232:157): prog-id=25 op=LOAD Sep 13 00:52:39.232000 audit: BPF prog-id=25 op=LOAD Sep 13 00:52:39.234000 audit: BPF prog-id=26 op=LOAD Sep 13 00:52:39.234000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:52:39.234000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:52:39.235000 audit: BPF prog-id=27 op=LOAD Sep 13 00:52:39.235000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:52:39.236000 audit: BPF prog-id=28 op=LOAD Sep 13 00:52:39.236000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:52:39.236000 audit: BPF prog-id=29 op=LOAD Sep 13 00:52:39.236000 audit: BPF prog-id=30 op=LOAD Sep 13 00:52:39.236000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:52:39.236000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:52:39.238000 audit: BPF prog-id=31 op=LOAD Sep 13 00:52:39.238000 audit: BPF prog-id=32 op=LOAD Sep 13 00:52:39.238000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:52:39.238000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:52:39.240278 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:52:39.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.244581 systemd[1]: Starting audit-rules.service... Sep 13 00:52:39.246148 systemd[1]: Starting clean-ca-certificates.service... 
Sep 13 00:52:39.248081 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:52:39.249000 audit: BPF prog-id=33 op=LOAD Sep 13 00:52:39.250427 systemd[1]: Starting systemd-resolved.service... Sep 13 00:52:39.251000 audit: BPF prog-id=34 op=LOAD Sep 13 00:52:39.252487 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:52:39.254058 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:52:39.258274 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.258439 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.259478 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:39.262043 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:39.263726 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:39.264437 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.264544 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:39.264640 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.265424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:39.265526 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:39.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:39.266715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:39.266821 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:39.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.268005 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:39.268160 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:39.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.269264 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:39.269347 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.270707 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.270876 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.272127 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:39.273825 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 13 00:52:39.275469 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:39.276190 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.276293 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:39.276386 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.277083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:39.277202 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:39.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.278378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:39.278471 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:39.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.279583 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:39.279682 systemd[1]: Finished modprobe@loop.service. 
Sep 13 00:52:39.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.280695 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:39.280776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.282820 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.283010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.284403 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:39.286004 systemd[1]: Starting modprobe@drm.service... Sep 13 00:52:39.287587 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:39.289266 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:39.290338 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.290447 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:39.291612 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:52:39.292578 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:39.293600 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:52:39.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.294891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:39.294981 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:39.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.296237 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:52:39.296337 systemd[1]: Finished modprobe@drm.service. Sep 13 00:52:39.297000 audit[1143]: SYSTEM_BOOT pid=1143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.299616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:39.299752 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 13 00:52:39.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.301156 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:39.301264 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:39.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.304842 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:39.304996 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:39.305283 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:52:39.308683 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:52:39.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.309913 systemd[1]: Finished ensure-sysext.service. 
Sep 13 00:52:39.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.310894 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:52:39.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:39.322212 systemd-resolved[1141]: Positive Trust Anchors: Sep 13 00:52:39.322468 systemd-resolved[1141]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:52:39.322560 systemd-resolved[1141]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:52:39.326000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:52:39.326000 audit[1169]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6d4ddaa0 a2=420 a3=0 items=0 ppid=1138 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:39.326000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:52:39.326536 augenrules[1169]: No rules Sep 13 00:52:39.327000 systemd[1]: Finished audit-rules.service. 
Sep 13 00:52:39.328965 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:52:39.330033 systemd[1]: Reached target time-set.target. Sep 13 00:52:39.989294 systemd-timesyncd[1142]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:52:39.989342 systemd-timesyncd[1142]: Initial clock synchronization to Sat 2025-09-13 00:52:39.989238 UTC. Sep 13 00:52:39.989665 systemd-resolved[1141]: Defaulting to hostname 'linux'. Sep 13 00:52:39.991009 systemd[1]: Started systemd-resolved.service. Sep 13 00:52:39.991898 systemd[1]: Reached target network.target. Sep 13 00:52:39.992675 systemd[1]: Reached target nss-lookup.target. Sep 13 00:52:40.028203 systemd[1]: Finished ldconfig.service. Sep 13 00:52:40.030066 systemd[1]: Starting systemd-update-done.service... Sep 13 00:52:40.162704 systemd[1]: Finished systemd-update-done.service. Sep 13 00:52:40.163668 systemd[1]: Reached target sysinit.target. Sep 13 00:52:40.164532 systemd[1]: Started motdgen.path. Sep 13 00:52:40.165252 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:52:40.166450 systemd[1]: Started logrotate.timer. Sep 13 00:52:40.167235 systemd[1]: Started mdadm.timer. Sep 13 00:52:40.167914 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:52:40.168759 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:52:40.168786 systemd[1]: Reached target paths.target. Sep 13 00:52:40.169629 systemd[1]: Reached target timers.target. Sep 13 00:52:40.170668 systemd[1]: Listening on dbus.socket. Sep 13 00:52:40.172228 systemd[1]: Starting docker.socket... Sep 13 00:52:40.185009 systemd[1]: Listening on sshd.socket. Sep 13 00:52:40.185862 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:40.186232 systemd[1]: Listening on docker.socket. 
Sep 13 00:52:40.187024 systemd[1]: Reached target sockets.target. Sep 13 00:52:40.187797 systemd[1]: Reached target basic.target. Sep 13 00:52:40.188583 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:52:40.188605 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:52:40.189400 systemd[1]: Starting containerd.service... Sep 13 00:52:40.190965 systemd[1]: Starting dbus.service... Sep 13 00:52:40.192473 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:52:40.194277 systemd[1]: Starting extend-filesystems.service... Sep 13 00:52:40.195276 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:52:40.196568 jq[1180]: false Sep 13 00:52:40.196178 systemd[1]: Starting motdgen.service... Sep 13 00:52:40.197659 systemd[1]: Starting prepare-helm.service... Sep 13 00:52:40.199291 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:52:40.201217 systemd[1]: Starting sshd-keygen.service... Sep 13 00:52:40.204372 systemd[1]: Starting systemd-logind.service... Sep 13 00:52:40.205291 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:40.205341 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:52:40.205649 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:52:40.206185 systemd[1]: Starting update-engine.service... Sep 13 00:52:40.209023 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:52:40.211191 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 00:52:40.211337 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:52:40.212250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:52:40.212381 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:52:40.213119 dbus-daemon[1179]: [system] SELinux support is enabled Sep 13 00:52:40.213508 systemd[1]: Started dbus.service. Sep 13 00:52:40.216968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:52:40.216993 systemd[1]: Reached target system-config.target. Sep 13 00:52:40.218036 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:52:40.218077 systemd[1]: Reached target user-config.target. Sep 13 00:52:40.219844 tar[1200]: linux-amd64/helm Sep 13 00:52:40.221907 extend-filesystems[1181]: Found loop1 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found sr0 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda1 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda2 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda3 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found usr Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda4 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda6 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda7 Sep 13 00:52:40.226737 extend-filesystems[1181]: Found vda9 Sep 13 00:52:40.226737 extend-filesystems[1181]: Checking size of /dev/vda9 Sep 13 00:52:40.254725 jq[1197]: true Sep 13 00:52:40.254806 jq[1210]: true Sep 13 00:52:40.259349 systemd-networkd[1023]: eth0: Gained IPv6LL Sep 13 00:52:40.261557 extend-filesystems[1181]: Resized partition /dev/vda9 Sep 13 00:52:40.261216 systemd[1]: Finished 
systemd-networkd-wait-online.service. Sep 13 00:52:40.263446 systemd[1]: Reached target network-online.target. Sep 13 00:52:40.263582 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:52:40.266698 systemd[1]: Starting kubelet.service... Sep 13 00:52:40.277062 env[1203]: time="2025-09-13T00:52:40.273247024Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:52:40.284809 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:52:40.284956 systemd[1]: Finished motdgen.service. Sep 13 00:52:40.300906 env[1203]: time="2025-09-13T00:52:40.300866671Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:52:40.301077 env[1203]: time="2025-09-13T00:52:40.301035037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302199 env[1203]: time="2025-09-13T00:52:40.302158744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302199 env[1203]: time="2025-09-13T00:52:40.302184983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302424 env[1203]: time="2025-09-13T00:52:40.302398744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302424 env[1203]: time="2025-09-13T00:52:40.302422368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302508 env[1203]: time="2025-09-13T00:52:40.302435473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:52:40.302508 env[1203]: time="2025-09-13T00:52:40.302444540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302557 env[1203]: time="2025-09-13T00:52:40.302520402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302755 env[1203]: time="2025-09-13T00:52:40.302736668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302870 env[1203]: time="2025-09-13T00:52:40.302851233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:40.302870 env[1203]: time="2025-09-13T00:52:40.302867694Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:52:40.302920 env[1203]: time="2025-09-13T00:52:40.302907679Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:52:40.302920 env[1203]: time="2025-09-13T00:52:40.302917667Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:52:40.404999 update_engine[1195]: I0913 00:52:40.404172 1195 main.cc:92] Flatcar Update Engine starting Sep 13 00:52:40.408342 systemd[1]: Started update-engine.service. Sep 13 00:52:40.434750 update_engine[1195]: I0913 00:52:40.409396 1195 update_check_scheduler.cc:74] Next update check in 11m50s Sep 13 00:52:40.411130 systemd[1]: Started locksmithd.service. 
Sep 13 00:52:40.435815 systemd-logind[1194]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:52:40.436169 systemd-logind[1194]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:52:40.436401 systemd-logind[1194]: New seat seat0. Sep 13 00:52:40.437979 systemd[1]: Started systemd-logind.service. Sep 13 00:52:40.448957 locksmithd[1237]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:52:40.452090 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:52:40.619549 tar[1200]: linux-amd64/LICENSE Sep 13 00:52:40.619777 tar[1200]: linux-amd64/README.md Sep 13 00:52:40.624352 systemd[1]: Finished prepare-helm.service. Sep 13 00:52:41.441089 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:52:41.856196 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:52:41.856196 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:52:41.856196 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:52:41.868118 extend-filesystems[1181]: Resized filesystem in /dev/vda9 Sep 13 00:52:41.857088 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:52:41.871275 sshd_keygen[1201]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:52:41.857292 systemd[1]: Finished extend-filesystems.service. Sep 13 00:52:41.889620 systemd[1]: Finished sshd-keygen.service. Sep 13 00:52:41.891749 systemd[1]: Starting issuegen.service... Sep 13 00:52:41.895977 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:52:41.896133 systemd[1]: Finished issuegen.service. Sep 13 00:52:41.897859 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:52:41.914004 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:52:41.916549 systemd[1]: Started getty@tty1.service. 
Sep 13 00:52:41.917240 env[1203]: time="2025-09-13T00:52:41.917163404Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917257180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917295161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917355495Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917375362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917389759Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917402944Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917423813Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917437729Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917451224Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917474778Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917494185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917626363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:52:41.917786 env[1203]: time="2025-09-13T00:52:41.917694811Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.917911698Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.917934280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.917951021Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.917990465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918001396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918012176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918022535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918060958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918071127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918082548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918094701Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918216990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918232319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918477 env[1203]: time="2025-09-13T00:52:41.918243059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918797 env[1203]: time="2025-09-13T00:52:41.918253108Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:52:41.918797 env[1203]: time="2025-09-13T00:52:41.918271192Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:52:41.918797 env[1203]: time="2025-09-13T00:52:41.918281110Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 13 00:52:41.918797 env[1203]: time="2025-09-13T00:52:41.918298052Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:52:41.918797 env[1203]: time="2025-09-13T00:52:41.918330904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:52:41.918754 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:52:41.918964 env[1203]: time="2025-09-13T00:52:41.918502836Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:52:41.918964 env[1203]: time="2025-09-13T00:52:41.918551978Z" level=info msg="Connect containerd service" Sep 13 00:52:41.918964 env[1203]: time="2025-09-13T00:52:41.918608244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:52:41.919872 env[1203]: time="2025-09-13T00:52:41.919167553Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:52:41.919872 env[1203]: time="2025-09-13T00:52:41.919373359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:52:41.919872 env[1203]: time="2025-09-13T00:52:41.919403355Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:52:41.919872 env[1203]: time="2025-09-13T00:52:41.919435525Z" level=info msg="containerd successfully booted in 1.649182s" Sep 13 00:52:41.920057 systemd[1]: Reached target getty.target. Sep 13 00:52:41.920888 env[1203]: time="2025-09-13T00:52:41.920780287Z" level=info msg="Start subscribing containerd event" Sep 13 00:52:41.921258 systemd[1]: Started containerd.service. 
Sep 13 00:52:41.923465 env[1203]: time="2025-09-13T00:52:41.921238867Z" level=info msg="Start recovering state" Sep 13 00:52:41.923465 env[1203]: time="2025-09-13T00:52:41.921330850Z" level=info msg="Start event monitor" Sep 13 00:52:41.923465 env[1203]: time="2025-09-13T00:52:41.921362249Z" level=info msg="Start snapshots syncer" Sep 13 00:52:41.923465 env[1203]: time="2025-09-13T00:52:41.921374842Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:52:41.923465 env[1203]: time="2025-09-13T00:52:41.923314971Z" level=info msg="Start streaming server" Sep 13 00:52:41.928243 bash[1229]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:52:41.930166 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:52:41.931188 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:52:41.932597 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:52:42.535267 systemd[1]: Started kubelet.service. Sep 13 00:52:42.536395 systemd[1]: Reached target multi-user.target. Sep 13 00:52:42.538410 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:52:42.544281 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:52:42.544406 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:52:42.545407 systemd[1]: Startup finished in 892ms (kernel) + 5.115s (initrd) + 7.904s (userspace) = 13.912s. Sep 13 00:52:42.567552 systemd[1]: Created slice system-sshd.slice. Sep 13 00:52:42.568410 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:46226.service. Sep 13 00:52:42.601631 sshd[1264]: Accepted publickey for core from 10.0.0.1 port 46226 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:52:42.603002 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:42.610191 systemd[1]: Created slice user-500.slice. Sep 13 00:52:42.611168 systemd[1]: Starting user-runtime-dir@500.service... 
Sep 13 00:52:42.613503 systemd-logind[1194]: New session 1 of user core. Sep 13 00:52:42.618443 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:52:42.619813 systemd[1]: Starting user@500.service... Sep 13 00:52:42.622332 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:42.694179 systemd[1272]: Queued start job for default target default.target. Sep 13 00:52:42.694614 systemd[1272]: Reached target paths.target. Sep 13 00:52:42.694633 systemd[1272]: Reached target sockets.target. Sep 13 00:52:42.694644 systemd[1272]: Reached target timers.target. Sep 13 00:52:42.694654 systemd[1272]: Reached target basic.target. Sep 13 00:52:42.694746 systemd[1]: Started user@500.service. Sep 13 00:52:42.695403 systemd[1272]: Reached target default.target. Sep 13 00:52:42.695455 systemd[1272]: Startup finished in 67ms. Sep 13 00:52:42.695531 systemd[1]: Started session-1.scope. Sep 13 00:52:42.746660 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:46230.service. Sep 13 00:52:42.777621 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:52:42.778661 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:42.782833 systemd-logind[1194]: New session 2 of user core. Sep 13 00:52:42.783852 systemd[1]: Started session-2.scope. Sep 13 00:52:42.837976 sshd[1281]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:42.840825 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:46230.service: Deactivated successfully. Sep 13 00:52:42.841340 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:52:42.841784 systemd-logind[1194]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:52:42.842830 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:46240.service. Sep 13 00:52:42.843638 systemd-logind[1194]: Removed session 2. 
Sep 13 00:52:42.872943 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:52:42.874155 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:42.877308 systemd-logind[1194]: New session 3 of user core. Sep 13 00:52:42.877954 systemd[1]: Started session-3.scope. Sep 13 00:52:42.928125 sshd[1288]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:42.930863 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:46240.service: Deactivated successfully. Sep 13 00:52:42.931336 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:52:42.931874 systemd-logind[1194]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:52:42.932912 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:46248.service. Sep 13 00:52:42.933692 systemd-logind[1194]: Removed session 3. Sep 13 00:52:42.940020 kubelet[1260]: E0913 00:52:42.939981 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:42.942137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:42.942253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:52:42.962874 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:52:42.963985 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:42.967170 systemd-logind[1194]: New session 4 of user core. Sep 13 00:52:42.968126 systemd[1]: Started session-4.scope. 
Sep 13 00:52:43.018937 sshd[1294]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:43.021402 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:46248.service: Deactivated successfully. Sep 13 00:52:43.021864 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:52:43.022296 systemd-logind[1194]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:52:43.023189 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:46264.service. Sep 13 00:52:43.023757 systemd-logind[1194]: Removed session 4. Sep 13 00:52:43.053678 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 46264 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:52:43.054787 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:43.058252 systemd-logind[1194]: New session 5 of user core. Sep 13 00:52:43.059188 systemd[1]: Started session-5.scope. Sep 13 00:52:43.113186 sudo[1303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:52:43.113426 sudo[1303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:52:43.130980 systemd[1]: Starting docker.service... 
Sep 13 00:52:43.172455 env[1315]: time="2025-09-13T00:52:43.172361719Z" level=info msg="Starting up" Sep 13 00:52:43.173965 env[1315]: time="2025-09-13T00:52:43.173944467Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:52:43.173965 env[1315]: time="2025-09-13T00:52:43.173962241Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:52:43.174029 env[1315]: time="2025-09-13T00:52:43.173980906Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:52:43.174029 env[1315]: time="2025-09-13T00:52:43.173992467Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:52:43.175666 env[1315]: time="2025-09-13T00:52:43.175646008Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:52:43.175666 env[1315]: time="2025-09-13T00:52:43.175661337Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:52:43.175739 env[1315]: time="2025-09-13T00:52:43.175673831Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:52:43.175739 env[1315]: time="2025-09-13T00:52:43.175686304Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:52:43.181154 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1974337096-merged.mount: Deactivated successfully. Sep 13 00:52:43.861799 env[1315]: time="2025-09-13T00:52:43.861728628Z" level=info msg="Loading containers: start." Sep 13 00:52:43.985087 kernel: Initializing XFRM netlink socket Sep 13 00:52:44.015226 env[1315]: time="2025-09-13T00:52:44.015178762Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:52:44.064127 systemd-networkd[1023]: docker0: Link UP Sep 13 00:52:44.080487 env[1315]: time="2025-09-13T00:52:44.080421445Z" level=info msg="Loading containers: done." Sep 13 00:52:44.090181 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3534304860-merged.mount: Deactivated successfully. Sep 13 00:52:44.093071 env[1315]: time="2025-09-13T00:52:44.093026434Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:52:44.093227 env[1315]: time="2025-09-13T00:52:44.093208486Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:52:44.093339 env[1315]: time="2025-09-13T00:52:44.093319674Z" level=info msg="Daemon has completed initialization" Sep 13 00:52:44.109327 systemd[1]: Started docker.service. Sep 13 00:52:44.116648 env[1315]: time="2025-09-13T00:52:44.116554508Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:52:44.802235 env[1203]: time="2025-09-13T00:52:44.802197012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:52:45.423248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216114351.mount: Deactivated successfully. 
Sep 13 00:52:46.783677 env[1203]: time="2025-09-13T00:52:46.783579512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:46.785400 env[1203]: time="2025-09-13T00:52:46.785352727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:46.787007 env[1203]: time="2025-09-13T00:52:46.786977534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:46.788810 env[1203]: time="2025-09-13T00:52:46.788746682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:46.789534 env[1203]: time="2025-09-13T00:52:46.789500676Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:52:46.790143 env[1203]: time="2025-09-13T00:52:46.790115068Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:52:48.406640 env[1203]: time="2025-09-13T00:52:48.406574216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:48.408476 env[1203]: time="2025-09-13T00:52:48.408425989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:52:48.410298 env[1203]: time="2025-09-13T00:52:48.410269015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:48.411910 env[1203]: time="2025-09-13T00:52:48.411882040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:48.412565 env[1203]: time="2025-09-13T00:52:48.412523583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:52:48.413074 env[1203]: time="2025-09-13T00:52:48.413028881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:52:50.188596 env[1203]: time="2025-09-13T00:52:50.188542072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:50.190354 env[1203]: time="2025-09-13T00:52:50.190294288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:50.191836 env[1203]: time="2025-09-13T00:52:50.191804660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:50.194215 env[1203]: time="2025-09-13T00:52:50.194185906Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:50.194912 env[1203]: time="2025-09-13T00:52:50.194868887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:52:50.195421 env[1203]: time="2025-09-13T00:52:50.195396697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:52:51.535994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821971582.mount: Deactivated successfully. Sep 13 00:52:52.481530 env[1203]: time="2025-09-13T00:52:52.481460292Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:52.483714 env[1203]: time="2025-09-13T00:52:52.483680445Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:52.485333 env[1203]: time="2025-09-13T00:52:52.485281688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:52.486453 env[1203]: time="2025-09-13T00:52:52.486415745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:52.486845 env[1203]: time="2025-09-13T00:52:52.486808051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference 
\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:52:52.487358 env[1203]: time="2025-09-13T00:52:52.487307147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:52:52.911560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188229576.mount: Deactivated successfully. Sep 13 00:52:53.087217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:52:53.087393 systemd[1]: Stopped kubelet.service. Sep 13 00:52:53.088671 systemd[1]: Starting kubelet.service... Sep 13 00:52:53.178811 systemd[1]: Started kubelet.service. Sep 13 00:52:53.312910 kubelet[1448]: E0913 00:52:53.312851 1448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:53.315888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:53.316006 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:52:54.169882 env[1203]: time="2025-09-13T00:52:54.169808908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.171814 env[1203]: time="2025-09-13T00:52:54.171757793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.173846 env[1203]: time="2025-09-13T00:52:54.173809290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.175628 env[1203]: time="2025-09-13T00:52:54.175591813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.176314 env[1203]: time="2025-09-13T00:52:54.176281907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:52:54.176756 env[1203]: time="2025-09-13T00:52:54.176732022Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:52:54.872247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214884753.mount: Deactivated successfully. 
Sep 13 00:52:54.877522 env[1203]: time="2025-09-13T00:52:54.877475325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.879577 env[1203]: time="2025-09-13T00:52:54.879536611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.881195 env[1203]: time="2025-09-13T00:52:54.881161188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.882504 env[1203]: time="2025-09-13T00:52:54.882472667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:54.882903 env[1203]: time="2025-09-13T00:52:54.882874431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:52:54.883438 env[1203]: time="2025-09-13T00:52:54.883411348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:52:55.463124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671901030.mount: Deactivated successfully. 
Sep 13 00:52:59.363352 env[1203]: time="2025-09-13T00:52:59.363268208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:59.365109 env[1203]: time="2025-09-13T00:52:59.365071109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:59.366695 env[1203]: time="2025-09-13T00:52:59.366659859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:59.368413 env[1203]: time="2025-09-13T00:52:59.368379814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:59.369121 env[1203]: time="2025-09-13T00:52:59.369081150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:53:01.672807 systemd[1]: Stopped kubelet.service. Sep 13 00:53:01.674934 systemd[1]: Starting kubelet.service... Sep 13 00:53:01.692772 systemd[1]: Reloading. 
Sep 13 00:53:01.764140 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2025-09-13T00:53:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:01.764170 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2025-09-13T00:53:01Z" level=info msg="torcx already run" Sep 13 00:53:02.067411 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:02.067431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:02.084225 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:02.159386 systemd[1]: Started kubelet.service. Sep 13 00:53:02.160602 systemd[1]: Stopping kubelet.service... Sep 13 00:53:02.160826 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:53:02.160998 systemd[1]: Stopped kubelet.service. Sep 13 00:53:02.162277 systemd[1]: Starting kubelet.service... Sep 13 00:53:02.251665 systemd[1]: Started kubelet.service. Sep 13 00:53:02.284643 kubelet[1551]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:02.284643 kubelet[1551]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:53:02.284643 kubelet[1551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:02.285023 kubelet[1551]: I0913 00:53:02.284704 1551 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:53:02.587655 kubelet[1551]: I0913 00:53:02.587592 1551 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:53:02.587655 kubelet[1551]: I0913 00:53:02.587624 1551 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:53:02.587896 kubelet[1551]: I0913 00:53:02.587872 1551 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:53:02.607976 kubelet[1551]: I0913 00:53:02.607938 1551 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:53:02.608138 kubelet[1551]: E0913 00:53:02.608081 1551 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:02.614336 kubelet[1551]: E0913 00:53:02.614305 1551 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:53:02.614336 kubelet[1551]: I0913 00:53:02.614332 1551 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:53:02.618906 kubelet[1551]: I0913 00:53:02.618879 1551 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:53:02.619440 kubelet[1551]: I0913 00:53:02.619415 1551 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:53:02.619560 kubelet[1551]: I0913 00:53:02.619527 1551 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:53:02.619754 kubelet[1551]: I0913 00:53:02.619553 1551 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:53:02.619754 kubelet[1551]: I0913 00:53:02.619754 1551 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:53:02.619888 kubelet[1551]: I0913 00:53:02.619762 1551 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:53:02.619888 kubelet[1551]: I0913 00:53:02.619861 1551 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:02.625296 kubelet[1551]: I0913 00:53:02.625275 1551 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:53:02.625296 kubelet[1551]: I0913 00:53:02.625296 1551 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:53:02.625379 kubelet[1551]: I0913 00:53:02.625325 1551 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:53:02.625379 kubelet[1551]: I0913 00:53:02.625341 1551 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:53:02.636114 kubelet[1551]: I0913 00:53:02.636080 1551 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:53:02.636502 kubelet[1551]: I0913 00:53:02.636483 1551 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:53:02.637815 kubelet[1551]: W0913 00:53:02.637778 1551 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:53:02.638541 kubelet[1551]: W0913 00:53:02.638494 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:02.638593 kubelet[1551]: E0913 00:53:02.638546 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:02.638593 kubelet[1551]: W0913 00:53:02.638490 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:02.638593 kubelet[1551]: E0913 00:53:02.638565 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:02.639326 kubelet[1551]: I0913 00:53:02.639302 1551 server.go:1274] "Started kubelet" Sep 13 00:53:02.639448 kubelet[1551]: I0913 00:53:02.639426 1551 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:53:02.639513 kubelet[1551]: I0913 00:53:02.639472 1551 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:53:02.639810 kubelet[1551]: I0913 00:53:02.639794 1551 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:53:02.642262 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 13 00:53:02.642347 kubelet[1551]: I0913 00:53:02.642243 1551 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:53:02.642388 kubelet[1551]: I0913 00:53:02.642345 1551 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:53:02.646484 kubelet[1551]: I0913 00:53:02.646254 1551 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:53:02.646484 kubelet[1551]: E0913 00:53:02.646319 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:02.646484 kubelet[1551]: I0913 00:53:02.646328 1551 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:53:02.647660 kubelet[1551]: I0913 00:53:02.647634 1551 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:53:02.647709 kubelet[1551]: I0913 00:53:02.647673 1551 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:53:02.648032 kubelet[1551]: W0913 00:53:02.647999 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:02.648105 kubelet[1551]: E0913 00:53:02.648035 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:02.648105 kubelet[1551]: E0913 00:53:02.648095 1551 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Sep 13 00:53:02.648516 kubelet[1551]: I0913 00:53:02.648498 1551 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:53:02.648586 kubelet[1551]: I0913 00:53:02.648551 1551 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:53:02.651023 kubelet[1551]: I0913 00:53:02.650997 1551 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:53:02.653258 kubelet[1551]: E0913 00:53:02.653233 1551 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:53:02.654590 kubelet[1551]: E0913 00:53:02.653644 1551 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b160821d5ed4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:53:02.639275732 +0000 UTC m=+0.384506779,LastTimestamp:2025-09-13 00:53:02.639275732 +0000 UTC m=+0.384506779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:53:02.662071 kubelet[1551]: I0913 00:53:02.662003 1551 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:53:02.663029 kubelet[1551]: I0913 00:53:02.663005 1551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:53:02.663097 kubelet[1551]: I0913 00:53:02.663033 1551 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:02.663097 kubelet[1551]: I0913 00:53:02.663079 1551 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:02.663151 kubelet[1551]: E0913 00:53:02.663114 1551 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:02.663657 kubelet[1551]: W0913 00:53:02.663629 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:02.663703 kubelet[1551]: E0913 00:53:02.663662 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:02.664680 kubelet[1551]: I0913 00:53:02.664642 1551 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:53:02.664680 kubelet[1551]: I0913 00:53:02.664660 1551 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:53:02.664680 kubelet[1551]: I0913 00:53:02.664676 1551 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:02.747331 kubelet[1551]: E0913 00:53:02.747283 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:02.763708 kubelet[1551]: E0913 00:53:02.763637 1551 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Sep 13 00:53:02.847581 kubelet[1551]: E0913 00:53:02.847407 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:02.848927 kubelet[1551]: E0913 00:53:02.848888 1551 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Sep 13 00:53:02.948139 kubelet[1551]: E0913 00:53:02.948090 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:02.964347 kubelet[1551]: E0913 00:53:02.964322 1551 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:03.048665 kubelet[1551]: E0913 00:53:03.048615 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:03.149659 kubelet[1551]: E0913 00:53:03.149520 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:03.249431 kubelet[1551]: E0913 00:53:03.249379 1551 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Sep 13 00:53:03.250376 kubelet[1551]: E0913 00:53:03.250341 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:03.284548 kubelet[1551]: I0913 00:53:03.284503 1551 policy_none.go:49] "None policy: Start" Sep 13 00:53:03.285367 kubelet[1551]: I0913 00:53:03.285345 1551 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:53:03.285640 
kubelet[1551]: I0913 00:53:03.285384 1551 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:53:03.348598 systemd[1]: Created slice kubepods.slice. Sep 13 00:53:03.350649 kubelet[1551]: E0913 00:53:03.350594 1551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:03.352217 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:53:03.354459 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:53:03.365342 kubelet[1551]: E0913 00:53:03.365316 1551 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:03.365619 kubelet[1551]: I0913 00:53:03.365592 1551 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:03.365791 kubelet[1551]: I0913 00:53:03.365775 1551 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:03.365834 kubelet[1551]: I0913 00:53:03.365794 1551 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:03.366250 kubelet[1551]: I0913 00:53:03.366093 1551 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:03.366843 kubelet[1551]: E0913 00:53:03.366818 1551 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:53:03.467548 kubelet[1551]: I0913 00:53:03.467411 1551 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:03.467812 kubelet[1551]: E0913 00:53:03.467772 1551 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:53:03.669664 kubelet[1551]: I0913 00:53:03.669612 1551 kubelet_node_status.go:72] "Attempting to register node" 
node="localhost" Sep 13 00:53:03.669969 kubelet[1551]: E0913 00:53:03.669926 1551 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:53:03.828215 kubelet[1551]: W0913 00:53:03.828055 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:03.828215 kubelet[1551]: E0913 00:53:03.828124 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:03.906487 kubelet[1551]: W0913 00:53:03.906371 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:03.906487 kubelet[1551]: E0913 00:53:03.906452 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:03.922058 kubelet[1551]: W0913 00:53:03.922005 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 
13 00:53:03.922136 kubelet[1551]: E0913 00:53:03.922059 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:04.050365 kubelet[1551]: E0913 00:53:04.050289 1551 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Sep 13 00:53:04.071687 kubelet[1551]: I0913 00:53:04.071669 1551 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:04.071915 kubelet[1551]: E0913 00:53:04.071897 1551 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:53:04.114563 kubelet[1551]: W0913 00:53:04.114529 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:04.114676 kubelet[1551]: E0913 00:53:04.114569 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:04.172100 systemd[1]: Created slice kubepods-burstable-pod7881aa82872413b6cd6df042fc8c0e9e.slice. 
Sep 13 00:53:04.183077 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 13 00:53:04.189870 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 00:53:04.257664 kubelet[1551]: I0913 00:53:04.257610 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:04.257664 kubelet[1551]: I0913 00:53:04.257647 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:04.257664 kubelet[1551]: I0913 00:53:04.257669 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:04.257924 kubelet[1551]: I0913 00:53:04.257699 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:04.257924 kubelet[1551]: I0913 00:53:04.257726 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:53:04.257924 kubelet[1551]: I0913 00:53:04.257743 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:04.257924 kubelet[1551]: I0913 00:53:04.257756 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:04.257924 kubelet[1551]: I0913 00:53:04.257775 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:04.258097 kubelet[1551]: I0913 00:53:04.257792 1551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:04.483200 kubelet[1551]: E0913 00:53:04.483023 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:04.483789 env[1203]: time="2025-09-13T00:53:04.483755379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7881aa82872413b6cd6df042fc8c0e9e,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:04.488454 kubelet[1551]: E0913 00:53:04.488427 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:04.488877 env[1203]: time="2025-09-13T00:53:04.488853931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:04.492113 kubelet[1551]: E0913 00:53:04.492082 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:04.492362 env[1203]: time="2025-09-13T00:53:04.492337704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:04.701146 kubelet[1551]: E0913 00:53:04.701092 1551 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:04.873261 kubelet[1551]: I0913 00:53:04.873219 1551 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:04.873685 kubelet[1551]: E0913 00:53:04.873644 1551 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 
10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:53:05.138393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255086517.mount: Deactivated successfully. Sep 13 00:53:05.144460 env[1203]: time="2025-09-13T00:53:05.144398181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.145316 env[1203]: time="2025-09-13T00:53:05.145270516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.147101 env[1203]: time="2025-09-13T00:53:05.147064982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.149504 env[1203]: time="2025-09-13T00:53:05.149472346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.150628 env[1203]: time="2025-09-13T00:53:05.150593990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.151951 env[1203]: time="2025-09-13T00:53:05.151916700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.153229 env[1203]: time="2025-09-13T00:53:05.153200668Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.155269 env[1203]: 
time="2025-09-13T00:53:05.155241305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.157276 env[1203]: time="2025-09-13T00:53:05.157236818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.159399 env[1203]: time="2025-09-13T00:53:05.159371571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.160203 env[1203]: time="2025-09-13T00:53:05.160169337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.161647 env[1203]: time="2025-09-13T00:53:05.161615679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:05.188412 env[1203]: time="2025-09-13T00:53:05.188323105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:05.188542 env[1203]: time="2025-09-13T00:53:05.188413735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:05.188542 env[1203]: time="2025-09-13T00:53:05.188444433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:05.190106 env[1203]: time="2025-09-13T00:53:05.189567900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e164606a0573bc667504784f25babd8bd880e225508c07644a144f0dfab3b9b9 pid=1594 runtime=io.containerd.runc.v2 Sep 13 00:53:05.297503 systemd[1]: Started cri-containerd-e164606a0573bc667504784f25babd8bd880e225508c07644a144f0dfab3b9b9.scope. Sep 13 00:53:05.300108 env[1203]: time="2025-09-13T00:53:05.299977818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:05.300108 env[1203]: time="2025-09-13T00:53:05.300019987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:05.300108 env[1203]: time="2025-09-13T00:53:05.300051376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:05.301100 env[1203]: time="2025-09-13T00:53:05.300414557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a24fd64be99badbf1d2cff0bfb6c2e0a6ad30d65f9394a7c77187e95062df47 pid=1619 runtime=io.containerd.runc.v2 Sep 13 00:53:05.302694 env[1203]: time="2025-09-13T00:53:05.302120366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:05.302694 env[1203]: time="2025-09-13T00:53:05.302168767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:05.302694 env[1203]: time="2025-09-13T00:53:05.302178795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:05.302694 env[1203]: time="2025-09-13T00:53:05.302475292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/462797b13f4c5598b6bcdbb66fc6a400604b69b528e0bc2e5c61a670fc78e91a pid=1622 runtime=io.containerd.runc.v2 Sep 13 00:53:05.317978 systemd[1]: Started cri-containerd-462797b13f4c5598b6bcdbb66fc6a400604b69b528e0bc2e5c61a670fc78e91a.scope. Sep 13 00:53:05.335469 systemd[1]: Started cri-containerd-7a24fd64be99badbf1d2cff0bfb6c2e0a6ad30d65f9394a7c77187e95062df47.scope. Sep 13 00:53:05.472313 env[1203]: time="2025-09-13T00:53:05.472103164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a24fd64be99badbf1d2cff0bfb6c2e0a6ad30d65f9394a7c77187e95062df47\"" Sep 13 00:53:05.478906 kubelet[1551]: E0913 00:53:05.478428 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:05.482701 env[1203]: time="2025-09-13T00:53:05.482614918Z" level=info msg="CreateContainer within sandbox \"7a24fd64be99badbf1d2cff0bfb6c2e0a6ad30d65f9394a7c77187e95062df47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:53:05.527995 env[1203]: time="2025-09-13T00:53:05.527909641Z" level=info msg="CreateContainer within sandbox \"7a24fd64be99badbf1d2cff0bfb6c2e0a6ad30d65f9394a7c77187e95062df47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8954475af9d0c219e9c071705dd60952f833ae545aeed5c238fbd38cd503128d\"" Sep 13 00:53:05.529379 env[1203]: time="2025-09-13T00:53:05.529335185Z" level=info msg="StartContainer for \"8954475af9d0c219e9c071705dd60952f833ae545aeed5c238fbd38cd503128d\"" Sep 13 00:53:05.537838 env[1203]: 
time="2025-09-13T00:53:05.537779120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e164606a0573bc667504784f25babd8bd880e225508c07644a144f0dfab3b9b9\"" Sep 13 00:53:05.538777 kubelet[1551]: E0913 00:53:05.538744 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:05.545708 env[1203]: time="2025-09-13T00:53:05.545661002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7881aa82872413b6cd6df042fc8c0e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"462797b13f4c5598b6bcdbb66fc6a400604b69b528e0bc2e5c61a670fc78e91a\"" Sep 13 00:53:05.546226 kubelet[1551]: W0913 00:53:05.546159 1551 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 13 00:53:05.546285 kubelet[1551]: E0913 00:53:05.546230 1551 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:05.546502 env[1203]: time="2025-09-13T00:53:05.546474046Z" level=info msg="CreateContainer within sandbox \"e164606a0573bc667504784f25babd8bd880e225508c07644a144f0dfab3b9b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:53:05.546873 kubelet[1551]: E0913 00:53:05.546849 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:05.548398 env[1203]: time="2025-09-13T00:53:05.548372507Z" level=info msg="CreateContainer within sandbox \"462797b13f4c5598b6bcdbb66fc6a400604b69b528e0bc2e5c61a670fc78e91a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:53:05.554944 systemd[1]: Started cri-containerd-8954475af9d0c219e9c071705dd60952f833ae545aeed5c238fbd38cd503128d.scope. Sep 13 00:53:05.571117 env[1203]: time="2025-09-13T00:53:05.571077987Z" level=info msg="CreateContainer within sandbox \"462797b13f4c5598b6bcdbb66fc6a400604b69b528e0bc2e5c61a670fc78e91a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba6a00d85549a9cb27d9f2f420c18076f0fbddcf2ac5ee82c62249e81e0cc9fb\"" Sep 13 00:53:05.572019 env[1203]: time="2025-09-13T00:53:05.572000848Z" level=info msg="StartContainer for \"ba6a00d85549a9cb27d9f2f420c18076f0fbddcf2ac5ee82c62249e81e0cc9fb\"" Sep 13 00:53:05.583485 env[1203]: time="2025-09-13T00:53:05.583447685Z" level=info msg="CreateContainer within sandbox \"e164606a0573bc667504784f25babd8bd880e225508c07644a144f0dfab3b9b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"987c1faf5235bb1746ab9fa946a5f7277fe1c38f99585afbb1f4c285042ea773\"" Sep 13 00:53:05.584327 env[1203]: time="2025-09-13T00:53:05.584288171Z" level=info msg="StartContainer for \"987c1faf5235bb1746ab9fa946a5f7277fe1c38f99585afbb1f4c285042ea773\"" Sep 13 00:53:05.587835 systemd[1]: Started cri-containerd-ba6a00d85549a9cb27d9f2f420c18076f0fbddcf2ac5ee82c62249e81e0cc9fb.scope. Sep 13 00:53:05.604542 env[1203]: time="2025-09-13T00:53:05.604484988Z" level=info msg="StartContainer for \"8954475af9d0c219e9c071705dd60952f833ae545aeed5c238fbd38cd503128d\" returns successfully" Sep 13 00:53:05.616375 systemd[1]: Started cri-containerd-987c1faf5235bb1746ab9fa946a5f7277fe1c38f99585afbb1f4c285042ea773.scope. 
Sep 13 00:53:05.651201 kubelet[1551]: E0913 00:53:05.651155 1551 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="3.2s"
Sep 13 00:53:05.713892 env[1203]: time="2025-09-13T00:53:05.713828466Z" level=info msg="StartContainer for \"ba6a00d85549a9cb27d9f2f420c18076f0fbddcf2ac5ee82c62249e81e0cc9fb\" returns successfully"
Sep 13 00:53:05.715635 env[1203]: time="2025-09-13T00:53:05.715616799Z" level=info msg="StartContainer for \"987c1faf5235bb1746ab9fa946a5f7277fe1c38f99585afbb1f4c285042ea773\" returns successfully"
Sep 13 00:53:05.719025 kubelet[1551]: E0913 00:53:05.718988 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:05.720551 kubelet[1551]: E0913 00:53:05.720526 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:05.722695 kubelet[1551]: E0913 00:53:05.722629 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:06.475363 kubelet[1551]: I0913 00:53:06.475322 1551 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:53:06.724574 kubelet[1551]: E0913 00:53:06.724496 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:07.249657 kubelet[1551]: I0913 00:53:07.249614 1551 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:53:07.642698 kubelet[1551]: I0913 00:53:07.642641 1551 apiserver.go:52] "Watching apiserver"
Sep 13 00:53:07.648124 kubelet[1551]: I0913 00:53:07.648099 1551 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:53:08.043771 kubelet[1551]: E0913 00:53:08.043643 1551 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:53:08.043771 kubelet[1551]: E0913 00:53:08.043774 1551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:09.688277 systemd[1]: Reloading.
Sep 13 00:53:09.766501 /usr/lib/systemd/system-generators/torcx-generator[1852]: time="2025-09-13T00:53:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:53:09.766524 /usr/lib/systemd/system-generators/torcx-generator[1852]: time="2025-09-13T00:53:09Z" level=info msg="torcx already run"
Sep 13 00:53:10.031194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:53:10.031212 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:53:10.048470 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:53:10.168501 systemd[1]: Stopping kubelet.service...
Sep 13 00:53:10.191575 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:53:10.191778 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:10.193440 systemd[1]: Starting kubelet.service...
Sep 13 00:53:10.283023 systemd[1]: Started kubelet.service.
Sep 13 00:53:10.371908 kubelet[1898]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:53:10.371908 kubelet[1898]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:53:10.371908 kubelet[1898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:53:10.373349 kubelet[1898]: I0913 00:53:10.371927 1898 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:53:10.378079 kubelet[1898]: I0913 00:53:10.378034 1898 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:53:10.378079 kubelet[1898]: I0913 00:53:10.378081 1898 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:53:10.378408 kubelet[1898]: I0913 00:53:10.378384 1898 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:53:10.379952 kubelet[1898]: I0913 00:53:10.379926 1898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:53:10.382036 kubelet[1898]: I0913 00:53:10.381966 1898 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:53:10.391272 kubelet[1898]: E0913 00:53:10.391230 1898 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:53:10.391526 kubelet[1898]: I0913 00:53:10.391479 1898 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:53:10.394997 kubelet[1898]: I0913 00:53:10.394966 1898 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:53:10.395087 kubelet[1898]: I0913 00:53:10.395078 1898 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:53:10.395237 kubelet[1898]: I0913 00:53:10.395198 1898 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:53:10.395397 kubelet[1898]: I0913 00:53:10.395227 1898 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:53:10.395397 kubelet[1898]: I0913 00:53:10.395395 1898 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:53:10.395532 kubelet[1898]: I0913 00:53:10.395404 1898 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:53:10.395532 kubelet[1898]: I0913 00:53:10.395439 1898 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:53:10.395532 kubelet[1898]: I0913 00:53:10.395528 1898 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:53:10.395626 kubelet[1898]: I0913 00:53:10.395538 1898 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:53:10.395626 kubelet[1898]: I0913 00:53:10.395564 1898 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:53:10.395626 kubelet[1898]: I0913 00:53:10.395573 1898 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:53:10.396430 kubelet[1898]: I0913 00:53:10.396392 1898 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:53:10.396894 kubelet[1898]: I0913 00:53:10.396873 1898 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:53:10.397453 kubelet[1898]: I0913 00:53:10.397420 1898 server.go:1274] "Started kubelet"
Sep 13 00:53:10.397699 kubelet[1898]: I0913 00:53:10.397671 1898 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:53:10.397798 kubelet[1898]: I0913 00:53:10.397726 1898 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:53:10.398028 kubelet[1898]: I0913 00:53:10.397999 1898 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:53:10.400837 kubelet[1898]: I0913 00:53:10.400822 1898 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:53:10.400944 kubelet[1898]: I0913 00:53:10.400916 1898 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:53:10.403659 kubelet[1898]: E0913 00:53:10.403644 1898 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:53:10.404125 kubelet[1898]: I0913 00:53:10.404112 1898 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:53:10.404751 kubelet[1898]: I0913 00:53:10.404722 1898 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:53:10.404954 kubelet[1898]: E0913 00:53:10.404925 1898 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:53:10.406946 kubelet[1898]: I0913 00:53:10.406460 1898 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:53:10.406946 kubelet[1898]: I0913 00:53:10.406552 1898 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:53:10.408498 kubelet[1898]: I0913 00:53:10.408473 1898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:53:10.410344 kubelet[1898]: I0913 00:53:10.410299 1898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:53:10.410344 kubelet[1898]: I0913 00:53:10.410328 1898 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:53:10.410467 kubelet[1898]: I0913 00:53:10.410354 1898 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:53:10.410467 kubelet[1898]: E0913 00:53:10.410392 1898 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:53:10.412656 kubelet[1898]: I0913 00:53:10.412632 1898 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:53:10.412656 kubelet[1898]: I0913 00:53:10.412651 1898 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:53:10.412783 kubelet[1898]: I0913 00:53:10.412716 1898 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:53:10.450692 kubelet[1898]: I0913 00:53:10.450659 1898 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:53:10.450692 kubelet[1898]: I0913 00:53:10.450676 1898 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:53:10.450692 kubelet[1898]: I0913 00:53:10.450695 1898 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:53:10.450919 kubelet[1898]: I0913 00:53:10.450870 1898 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:53:10.450944 kubelet[1898]: I0913 00:53:10.450884 1898 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:53:10.450944 kubelet[1898]: I0913 00:53:10.450933 1898 policy_none.go:49] "None policy: Start"
Sep 13 00:53:10.451531 kubelet[1898]: I0913 00:53:10.451512 1898 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:53:10.451584 kubelet[1898]: I0913 00:53:10.451537 1898 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:53:10.451715 kubelet[1898]: I0913 00:53:10.451700 1898 state_mem.go:75] "Updated machine memory state"
Sep 13 00:53:10.455342 kubelet[1898]: I0913 00:53:10.455318 1898 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:53:10.455517 kubelet[1898]: I0913 00:53:10.455500 1898 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:53:10.455575 kubelet[1898]: I0913 00:53:10.455517 1898 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:53:10.455731 kubelet[1898]: I0913 00:53:10.455707 1898 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:53:10.559741 kubelet[1898]: I0913 00:53:10.559633 1898 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:53:10.707248 kubelet[1898]: I0913 00:53:10.707198 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:53:10.707248 kubelet[1898]: I0913 00:53:10.707236 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:53:10.707487 kubelet[1898]: I0913 00:53:10.707265 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:53:10.707487 kubelet[1898]: I0913 00:53:10.707281 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:53:10.707487 kubelet[1898]: I0913 00:53:10.707294 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:53:10.707590 kubelet[1898]: I0913 00:53:10.707311 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:53:10.707590 kubelet[1898]: I0913 00:53:10.707536 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:53:10.707590 kubelet[1898]: I0913 00:53:10.707561 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7881aa82872413b6cd6df042fc8c0e9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7881aa82872413b6cd6df042fc8c0e9e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:53:10.707590 kubelet[1898]: I0913 00:53:10.707580 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:53:10.711088 kubelet[1898]: I0913 00:53:10.711063 1898 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 13 00:53:10.711151 kubelet[1898]: I0913 00:53:10.711135 1898 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:53:11.009256 kubelet[1898]: E0913 00:53:11.009220 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.009523 kubelet[1898]: E0913 00:53:11.009318 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.011807 kubelet[1898]: E0913 00:53:11.011783 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.220965 sudo[1933]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 00:53:11.221188 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 00:53:11.396348 kubelet[1898]: I0913 00:53:11.396301 1898 apiserver.go:52] "Watching apiserver"
Sep 13 00:53:11.407608 kubelet[1898]: I0913 00:53:11.407575 1898 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:53:11.429249 kubelet[1898]: E0913 00:53:11.429215 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.430004 kubelet[1898]: E0913 00:53:11.429986 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.434173 kubelet[1898]: E0913 00:53:11.434141 1898 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:53:11.434311 kubelet[1898]: E0913 00:53:11.434284 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:11.446890 kubelet[1898]: I0913 00:53:11.446838 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.446809161 podStartE2EDuration="1.446809161s" podCreationTimestamp="2025-09-13 00:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:11.446631197 +0000 UTC m=+1.160022637" watchObservedRunningTime="2025-09-13 00:53:11.446809161 +0000 UTC m=+1.160200611"
Sep 13 00:53:11.457907 kubelet[1898]: I0913 00:53:11.457851 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.457824639 podStartE2EDuration="1.457824639s" podCreationTimestamp="2025-09-13 00:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:11.452065709 +0000 UTC m=+1.165457159" watchObservedRunningTime="2025-09-13 00:53:11.457824639 +0000 UTC m=+1.171216079"
Sep 13 00:53:11.458094 kubelet[1898]: I0913 00:53:11.457947 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.457943021 podStartE2EDuration="1.457943021s" podCreationTimestamp="2025-09-13 00:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:11.457693783 +0000 UTC m=+1.171085233" watchObservedRunningTime="2025-09-13 00:53:11.457943021 +0000 UTC m=+1.171334461"
Sep 13 00:53:11.829645 sudo[1933]: pam_unix(sudo:session): session closed for user root
Sep 13 00:53:12.430858 kubelet[1898]: E0913 00:53:12.430824 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:12.431207 kubelet[1898]: E0913 00:53:12.430927 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:12.665557 kubelet[1898]: E0913 00:53:12.665516 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:13.432076 kubelet[1898]: E0913 00:53:13.432023 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:13.475858 sudo[1303]: pam_unix(sudo:session): session closed for user root
Sep 13 00:53:13.477107 sshd[1300]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:13.479423 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:46264.service: Deactivated successfully.
Sep 13 00:53:13.480129 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:53:13.480254 systemd[1]: session-5.scope: Consumed 4.312s CPU time.
Sep 13 00:53:13.480811 systemd-logind[1194]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:53:13.481437 systemd-logind[1194]: Removed session 5.
Sep 13 00:53:15.252417 kubelet[1898]: I0913 00:53:15.252381 1898 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:53:15.252821 env[1203]: time="2025-09-13T00:53:15.252785990Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:53:15.253079 kubelet[1898]: I0913 00:53:15.252969 1898 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:53:16.539306 kubelet[1898]: W0913 00:53:16.539195 1898 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:53:16.539306 kubelet[1898]: E0913 00:53:16.539272 1898 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:53:16.540963 kubelet[1898]: W0913 00:53:16.540676 1898 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:53:16.540963 kubelet[1898]: E0913 00:53:16.540702 1898 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:53:16.540963 kubelet[1898]: W0913 00:53:16.540783 1898 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:53:16.540963 kubelet[1898]: E0913 00:53:16.540796 1898 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:53:16.540963 kubelet[1898]: W0913 00:53:16.540834 1898 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:53:16.541155 kubelet[1898]: E0913 00:53:16.540844 1898 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:53:16.541365 kubelet[1898]: W0913 00:53:16.541340 1898 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:53:16.541432 kubelet[1898]: E0913 00:53:16.541367 1898 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:53:16.543756 systemd[1]: Created slice kubepods-burstable-podd994bcba_ef33_4e46_8658_37609ee72b0f.slice.
Sep 13 00:53:16.547683 systemd[1]: Created slice kubepods-besteffort-pod6f43b2b0_b568_44d9_9bd6_8944ae33a069.slice.
Sep 13 00:53:16.551886 kubelet[1898]: I0913 00:53:16.551853 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptxcp\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.551886 kubelet[1898]: I0913 00:53:16.551889 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-kernel\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551910 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cni-path\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551925 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-etc-cni-netd\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551938 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-net\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551953 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-proxy\") pod \"kube-proxy-xxbg6\" (UID: \"6f43b2b0-b568-44d9-9bd6-8944ae33a069\") " pod="kube-system/kube-proxy-xxbg6"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551967 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f43b2b0-b568-44d9-9bd6-8944ae33a069-xtables-lock\") pod \"kube-proxy-xxbg6\" (UID: \"6f43b2b0-b568-44d9-9bd6-8944ae33a069\") " pod="kube-system/kube-proxy-xxbg6"
Sep 13 00:53:16.552034 kubelet[1898]: I0913 00:53:16.551990 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-run\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552002 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-bpf-maps\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552013 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-lib-modules\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552025 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d994bcba-ef33-4e46-8658-37609ee72b0f-clustermesh-secrets\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552067 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552082 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f43b2b0-b568-44d9-9bd6-8944ae33a069-lib-modules\") pod \"kube-proxy-xxbg6\" (UID: \"6f43b2b0-b568-44d9-9bd6-8944ae33a069\") " pod="kube-system/kube-proxy-xxbg6"
Sep 13 00:53:16.552202 kubelet[1898]: I0913 00:53:16.552097 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-xtables-lock\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552338 kubelet[1898]: I0913 00:53:16.552111 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dp9\" (UniqueName: \"kubernetes.io/projected/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-api-access-68dp9\") pod \"kube-proxy-xxbg6\" (UID: \"6f43b2b0-b568-44d9-9bd6-8944ae33a069\") " pod="kube-system/kube-proxy-xxbg6"
Sep 13 00:53:16.552338 kubelet[1898]: I0913 00:53:16.552130 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-hostproc\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552338 kubelet[1898]: I0913 00:53:16.552151 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-cgroup\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.552338 kubelet[1898]: I0913 00:53:16.552171 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls\") pod \"cilium-lr8rs\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") " pod="kube-system/cilium-lr8rs"
Sep 13 00:53:16.568469 systemd[1]: Created slice kubepods-besteffort-pod7d43ccf7_2bc1_4fa2_a9df_98aee5a32e95.slice.
Sep 13 00:53:16.652949 kubelet[1898]: I0913 00:53:16.652893 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-cilium-config-path\") pod \"cilium-operator-5d85765b45-7vf7l\" (UID: \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\") " pod="kube-system/cilium-operator-5d85765b45-7vf7l"
Sep 13 00:53:16.653125 kubelet[1898]: I0913 00:53:16.652997 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7qw\" (UniqueName: \"kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw\") pod \"cilium-operator-5d85765b45-7vf7l\" (UID: \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\") " pod="kube-system/cilium-operator-5d85765b45-7vf7l"
Sep 13 00:53:17.605225 kubelet[1898]: I0913 00:53:17.605186 1898 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:53:17.653429 kubelet[1898]: E0913 00:53:17.653389 1898 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:53:17.653555 kubelet[1898]: E0913 00:53:17.653471 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path podName:d994bcba-ef33-4e46-8658-37609ee72b0f nodeName:}" failed. No retries permitted until 2025-09-13 00:53:18.153449858 +0000 UTC m=+7.866841308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path") pod "cilium-lr8rs" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:53:17.653555 kubelet[1898]: E0913 00:53:17.653389 1898 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:53:17.653555 kubelet[1898]: E0913 00:53:17.653497 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-proxy podName:6f43b2b0-b568-44d9-9bd6-8944ae33a069 nodeName:}" failed. No retries permitted until 2025-09-13 00:53:18.153490806 +0000 UTC m=+7.866882256 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-proxy") pod "kube-proxy-xxbg6" (UID: "6f43b2b0-b568-44d9-9bd6-8944ae33a069") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.654534 kubelet[1898]: E0913 00:53:17.654480 1898 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 00:53:17.654534 kubelet[1898]: E0913 00:53:17.654504 1898 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-lr8rs: failed to sync secret cache: timed out waiting for the condition Sep 13 00:53:17.654832 kubelet[1898]: E0913 00:53:17.654577 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls podName:d994bcba-ef33-4e46-8658-37609ee72b0f nodeName:}" failed. No retries permitted until 2025-09-13 00:53:18.154556844 +0000 UTC m=+7.867948294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls") pod "cilium-lr8rs" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:53:17.657672 kubelet[1898]: E0913 00:53:17.657644 1898 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.657672 kubelet[1898]: E0913 00:53:17.657668 1898 projected.go:194] Error preparing data for projected volume kube-api-access-68dp9 for pod kube-system/kube-proxy-xxbg6: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.657779 kubelet[1898]: E0913 00:53:17.657685 1898 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.657779 kubelet[1898]: E0913 00:53:17.657701 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-api-access-68dp9 podName:6f43b2b0-b568-44d9-9bd6-8944ae33a069 nodeName:}" failed. No retries permitted until 2025-09-13 00:53:18.15769109 +0000 UTC m=+7.871082540 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-68dp9" (UniqueName: "kubernetes.io/projected/6f43b2b0-b568-44d9-9bd6-8944ae33a069-kube-api-access-68dp9") pod "kube-proxy-xxbg6" (UID: "6f43b2b0-b568-44d9-9bd6-8944ae33a069") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.657779 kubelet[1898]: E0913 00:53:17.657718 1898 projected.go:194] Error preparing data for projected volume kube-api-access-ptxcp for pod kube-system/cilium-lr8rs: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.657896 kubelet[1898]: E0913 00:53:17.657782 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp podName:d994bcba-ef33-4e46-8658-37609ee72b0f nodeName:}" failed. No retries permitted until 2025-09-13 00:53:18.157754712 +0000 UTC m=+7.871146253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptxcp" (UniqueName: "kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp") pod "cilium-lr8rs" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.759231 kubelet[1898]: E0913 00:53:17.759184 1898 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.759231 kubelet[1898]: E0913 00:53:17.759222 1898 projected.go:194] Error preparing data for projected volume kube-api-access-8g7qw for pod kube-system/cilium-operator-5d85765b45-7vf7l: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:17.759402 kubelet[1898]: E0913 00:53:17.759298 1898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw podName:7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95 nodeName:}" failed. 
No retries permitted until 2025-09-13 00:53:18.259272615 +0000 UTC m=+7.972664145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8g7qw" (UniqueName: "kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw") pod "cilium-operator-5d85765b45-7vf7l" (UID: "7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:53:18.346498 kubelet[1898]: E0913 00:53:18.346452 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.346965 env[1203]: time="2025-09-13T00:53:18.346929729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lr8rs,Uid:d994bcba-ef33-4e46-8658-37609ee72b0f,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:18.355346 kubelet[1898]: E0913 00:53:18.355312 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.355713 env[1203]: time="2025-09-13T00:53:18.355680623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxbg6,Uid:6f43b2b0-b568-44d9-9bd6-8944ae33a069,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:18.372092 kubelet[1898]: E0913 00:53:18.372072 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.372357 env[1203]: time="2025-09-13T00:53:18.372330811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vf7l,Uid:7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:18.421421 env[1203]: time="2025-09-13T00:53:18.421356058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:18.421540 env[1203]: time="2025-09-13T00:53:18.421432555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:18.421540 env[1203]: time="2025-09-13T00:53:18.421446973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:18.422490 env[1203]: time="2025-09-13T00:53:18.421610346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf pid=1990 runtime=io.containerd.runc.v2 Sep 13 00:53:18.426407 env[1203]: time="2025-09-13T00:53:18.426348098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:18.426492 env[1203]: time="2025-09-13T00:53:18.426417992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:18.426492 env[1203]: time="2025-09-13T00:53:18.426459221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:18.426635 env[1203]: time="2025-09-13T00:53:18.426598818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3a4ea9a33eb8073bf1fd8bcf79af0e992e5cf739d5b26f7efae41efc5d98f1c pid=2010 runtime=io.containerd.runc.v2 Sep 13 00:53:18.435064 env[1203]: time="2025-09-13T00:53:18.432954451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:18.435064 env[1203]: time="2025-09-13T00:53:18.432983276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:18.435064 env[1203]: time="2025-09-13T00:53:18.432992174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:18.435064 env[1203]: time="2025-09-13T00:53:18.433107455Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0 pid=2024 runtime=io.containerd.runc.v2 Sep 13 00:53:18.443527 systemd[1]: Started cri-containerd-b3a4ea9a33eb8073bf1fd8bcf79af0e992e5cf739d5b26f7efae41efc5d98f1c.scope. Sep 13 00:53:18.450852 systemd[1]: Started cri-containerd-800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf.scope. Sep 13 00:53:18.455245 systemd[1]: Started cri-containerd-ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0.scope. Sep 13 00:53:18.482862 env[1203]: time="2025-09-13T00:53:18.480377979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxbg6,Uid:6f43b2b0-b568-44d9-9bd6-8944ae33a069,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a4ea9a33eb8073bf1fd8bcf79af0e992e5cf739d5b26f7efae41efc5d98f1c\"" Sep 13 00:53:18.483035 kubelet[1898]: E0913 00:53:18.481205 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.484219 env[1203]: time="2025-09-13T00:53:18.483726267Z" level=info msg="CreateContainer within sandbox \"b3a4ea9a33eb8073bf1fd8bcf79af0e992e5cf739d5b26f7efae41efc5d98f1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:53:18.484302 env[1203]: time="2025-09-13T00:53:18.484278916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lr8rs,Uid:d994bcba-ef33-4e46-8658-37609ee72b0f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\"" Sep 13 00:53:18.485130 kubelet[1898]: E0913 00:53:18.485107 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.486341 env[1203]: time="2025-09-13T00:53:18.485897259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:53:18.503161 env[1203]: time="2025-09-13T00:53:18.503089064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vf7l,Uid:7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\"" Sep 13 00:53:18.504074 kubelet[1898]: E0913 00:53:18.503902 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:18.537696 env[1203]: time="2025-09-13T00:53:18.537652169Z" level=info msg="CreateContainer within sandbox \"b3a4ea9a33eb8073bf1fd8bcf79af0e992e5cf739d5b26f7efae41efc5d98f1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"659bf7339dcc191f36cf96b739fbd9f8adb8d57de18a55fc38e8de274ee28947\"" Sep 13 00:53:18.538245 env[1203]: time="2025-09-13T00:53:18.538197995Z" level=info msg="StartContainer for \"659bf7339dcc191f36cf96b739fbd9f8adb8d57de18a55fc38e8de274ee28947\"" Sep 13 00:53:18.551476 systemd[1]: Started cri-containerd-659bf7339dcc191f36cf96b739fbd9f8adb8d57de18a55fc38e8de274ee28947.scope. 
Sep 13 00:53:18.575084 env[1203]: time="2025-09-13T00:53:18.575024659Z" level=info msg="StartContainer for \"659bf7339dcc191f36cf96b739fbd9f8adb8d57de18a55fc38e8de274ee28947\" returns successfully" Sep 13 00:53:19.441437 kubelet[1898]: E0913 00:53:19.441094 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:19.894207 kubelet[1898]: E0913 00:53:19.894156 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:19.909097 kubelet[1898]: I0913 00:53:19.908995 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xxbg6" podStartSLOduration=3.908973388 podStartE2EDuration="3.908973388s" podCreationTimestamp="2025-09-13 00:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:19.449483095 +0000 UTC m=+9.162874545" watchObservedRunningTime="2025-09-13 00:53:19.908973388 +0000 UTC m=+9.622364838" Sep 13 00:53:20.448732 kubelet[1898]: E0913 00:53:20.448662 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:21.690343 kubelet[1898]: E0913 00:53:21.690278 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:21.930070 kubelet[1898]: E0913 00:53:21.930001 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:22.670167 kubelet[1898]: E0913 00:53:22.669853 1898 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:25.615172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526514310.mount: Deactivated successfully. Sep 13 00:53:25.640171 update_engine[1195]: I0913 00:53:25.640110 1195 update_attempter.cc:509] Updating boot flags... Sep 13 00:53:30.278676 env[1203]: time="2025-09-13T00:53:30.278625353Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.280708 env[1203]: time="2025-09-13T00:53:30.280646923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.282455 env[1203]: time="2025-09-13T00:53:30.282404442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.282858 env[1203]: time="2025-09-13T00:53:30.282827404Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:53:30.284030 env[1203]: time="2025-09-13T00:53:30.284005495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:53:30.285014 env[1203]: time="2025-09-13T00:53:30.284967408Z" level=info msg="CreateContainer within sandbox 
\"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:53:30.302124 env[1203]: time="2025-09-13T00:53:30.302071141Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\"" Sep 13 00:53:30.302704 env[1203]: time="2025-09-13T00:53:30.302653975Z" level=info msg="StartContainer for \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\"" Sep 13 00:53:30.324921 systemd[1]: Started cri-containerd-cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82.scope. Sep 13 00:53:30.350782 env[1203]: time="2025-09-13T00:53:30.350704677Z" level=info msg="StartContainer for \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\" returns successfully" Sep 13 00:53:30.359514 systemd[1]: cri-containerd-cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82.scope: Deactivated successfully. 
Sep 13 00:53:30.466505 kubelet[1898]: E0913 00:53:30.466455 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:30.602579 env[1203]: time="2025-09-13T00:53:30.602523432Z" level=info msg="shim disconnected" id=cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82 Sep 13 00:53:30.602579 env[1203]: time="2025-09-13T00:53:30.602575211Z" level=warning msg="cleaning up after shim disconnected" id=cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82 namespace=k8s.io Sep 13 00:53:30.602579 env[1203]: time="2025-09-13T00:53:30.602586321Z" level=info msg="cleaning up dead shim" Sep 13 00:53:30.609485 env[1203]: time="2025-09-13T00:53:30.609438624Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2340 runtime=io.containerd.runc.v2\n" Sep 13 00:53:31.298255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82-rootfs.mount: Deactivated successfully. 
Sep 13 00:53:31.470208 kubelet[1898]: E0913 00:53:31.470165 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:31.473325 env[1203]: time="2025-09-13T00:53:31.472636883Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:53:31.488988 env[1203]: time="2025-09-13T00:53:31.488921686Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\"" Sep 13 00:53:31.489511 env[1203]: time="2025-09-13T00:53:31.489452270Z" level=info msg="StartContainer for \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\"" Sep 13 00:53:31.508235 systemd[1]: Started cri-containerd-c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32.scope. Sep 13 00:53:31.530068 env[1203]: time="2025-09-13T00:53:31.528616621Z" level=info msg="StartContainer for \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\" returns successfully" Sep 13 00:53:31.538236 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:53:31.538472 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:53:31.538620 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:53:31.539990 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:31.543817 systemd[1]: cri-containerd-c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32.scope: Deactivated successfully. Sep 13 00:53:31.549675 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:53:31.567057 env[1203]: time="2025-09-13T00:53:31.566998240Z" level=info msg="shim disconnected" id=c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32 Sep 13 00:53:31.567193 env[1203]: time="2025-09-13T00:53:31.567075325Z" level=warning msg="cleaning up after shim disconnected" id=c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32 namespace=k8s.io Sep 13 00:53:31.567193 env[1203]: time="2025-09-13T00:53:31.567085404Z" level=info msg="cleaning up dead shim" Sep 13 00:53:31.573483 env[1203]: time="2025-09-13T00:53:31.573438373Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2406 runtime=io.containerd.runc.v2\n" Sep 13 00:53:32.298084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32-rootfs.mount: Deactivated successfully. Sep 13 00:53:32.472409 kubelet[1898]: E0913 00:53:32.472372 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:32.473726 env[1203]: time="2025-09-13T00:53:32.473683594Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:53:32.486568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967545299.mount: Deactivated successfully. 
Sep 13 00:53:32.492911 env[1203]: time="2025-09-13T00:53:32.492855099Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\"" Sep 13 00:53:32.493368 env[1203]: time="2025-09-13T00:53:32.493342521Z" level=info msg="StartContainer for \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\"" Sep 13 00:53:32.513020 systemd[1]: Started cri-containerd-afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0.scope. Sep 13 00:53:32.541457 systemd[1]: cri-containerd-afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0.scope: Deactivated successfully. Sep 13 00:53:32.544732 env[1203]: time="2025-09-13T00:53:32.544686535Z" level=info msg="StartContainer for \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\" returns successfully" Sep 13 00:53:32.683834 env[1203]: time="2025-09-13T00:53:32.683762354Z" level=info msg="shim disconnected" id=afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0 Sep 13 00:53:32.683834 env[1203]: time="2025-09-13T00:53:32.683820574Z" level=warning msg="cleaning up after shim disconnected" id=afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0 namespace=k8s.io Sep 13 00:53:32.683834 env[1203]: time="2025-09-13T00:53:32.683830753Z" level=info msg="cleaning up dead shim" Sep 13 00:53:32.689451 env[1203]: time="2025-09-13T00:53:32.689400793Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" Sep 13 00:53:32.988808 env[1203]: time="2025-09-13T00:53:32.988628319Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:53:32.991227 env[1203]: time="2025-09-13T00:53:32.991179908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:32.992705 env[1203]: time="2025-09-13T00:53:32.992664867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:32.993252 env[1203]: time="2025-09-13T00:53:32.993204739Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:53:32.995257 env[1203]: time="2025-09-13T00:53:32.995212227Z" level=info msg="CreateContainer within sandbox \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:53:33.008979 env[1203]: time="2025-09-13T00:53:33.008909117Z" level=info msg="CreateContainer within sandbox \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\"" Sep 13 00:53:33.009598 env[1203]: time="2025-09-13T00:53:33.009560418Z" level=info msg="StartContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\"" Sep 13 00:53:33.027003 systemd[1]: Started cri-containerd-d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4.scope. 
Sep 13 00:53:33.052475 env[1203]: time="2025-09-13T00:53:33.052404423Z" level=info msg="StartContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" returns successfully" Sep 13 00:53:33.298882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0-rootfs.mount: Deactivated successfully. Sep 13 00:53:33.474659 kubelet[1898]: E0913 00:53:33.474612 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:33.476312 kubelet[1898]: E0913 00:53:33.476288 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:33.477876 env[1203]: time="2025-09-13T00:53:33.477833089Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:53:33.649716 env[1203]: time="2025-09-13T00:53:33.649658426Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\"" Sep 13 00:53:33.650267 env[1203]: time="2025-09-13T00:53:33.650239315Z" level=info msg="StartContainer for \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\"" Sep 13 00:53:33.689980 systemd[1]: Started cri-containerd-0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848.scope. 
Sep 13 00:53:33.714922 kubelet[1898]: I0913 00:53:33.714852 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7vf7l" podStartSLOduration=3.225423927 podStartE2EDuration="17.714829925s" podCreationTimestamp="2025-09-13 00:53:16 +0000 UTC" firstStartedPulling="2025-09-13 00:53:18.504494638 +0000 UTC m=+8.217886088" lastFinishedPulling="2025-09-13 00:53:32.993900646 +0000 UTC m=+22.707292086" observedRunningTime="2025-09-13 00:53:33.655481028 +0000 UTC m=+23.368872478" watchObservedRunningTime="2025-09-13 00:53:33.714829925 +0000 UTC m=+23.428221375" Sep 13 00:53:33.748183 systemd[1]: cri-containerd-0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848.scope: Deactivated successfully. Sep 13 00:53:33.877974 env[1203]: time="2025-09-13T00:53:33.877923768Z" level=info msg="StartContainer for \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\" returns successfully" Sep 13 00:53:33.910988 env[1203]: time="2025-09-13T00:53:33.910838095Z" level=info msg="shim disconnected" id=0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848 Sep 13 00:53:33.910988 env[1203]: time="2025-09-13T00:53:33.910883180Z" level=warning msg="cleaning up after shim disconnected" id=0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848 namespace=k8s.io Sep 13 00:53:33.910988 env[1203]: time="2025-09-13T00:53:33.910892137Z" level=info msg="cleaning up dead shim" Sep 13 00:53:33.926441 env[1203]: time="2025-09-13T00:53:33.926376029Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" Sep 13 00:53:34.298287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848-rootfs.mount: Deactivated successfully. 
Sep 13 00:53:34.480457 kubelet[1898]: E0913 00:53:34.480405 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:34.480936 kubelet[1898]: E0913 00:53:34.480587 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:34.482200 env[1203]: time="2025-09-13T00:53:34.482162998Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:53:34.499420 env[1203]: time="2025-09-13T00:53:34.499356676Z" level=info msg="CreateContainer within sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\"" Sep 13 00:53:34.499922 env[1203]: time="2025-09-13T00:53:34.499889604Z" level=info msg="StartContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\"" Sep 13 00:53:34.517964 systemd[1]: Started cri-containerd-6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3.scope. Sep 13 00:53:34.556253 env[1203]: time="2025-09-13T00:53:34.556127202Z" level=info msg="StartContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" returns successfully" Sep 13 00:53:34.669937 kubelet[1898]: I0913 00:53:34.669888 1898 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:53:34.700313 systemd[1]: Created slice kubepods-burstable-pod0cc3c8fb_69b6_43fe_a2b3_806cc52cab49.slice. Sep 13 00:53:34.705595 systemd[1]: Created slice kubepods-burstable-poda9284add_a0f7_41ab_ac2f_b602bfc3657c.slice. 
Sep 13 00:53:34.877352 kubelet[1898]: I0913 00:53:34.877315 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4tc\" (UniqueName: \"kubernetes.io/projected/a9284add-a0f7-41ab-ac2f-b602bfc3657c-kube-api-access-fg4tc\") pod \"coredns-7c65d6cfc9-z2v78\" (UID: \"a9284add-a0f7-41ab-ac2f-b602bfc3657c\") " pod="kube-system/coredns-7c65d6cfc9-z2v78" Sep 13 00:53:34.877352 kubelet[1898]: I0913 00:53:34.877354 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cc3c8fb-69b6-43fe-a2b3-806cc52cab49-config-volume\") pod \"coredns-7c65d6cfc9-g8mz4\" (UID: \"0cc3c8fb-69b6-43fe-a2b3-806cc52cab49\") " pod="kube-system/coredns-7c65d6cfc9-g8mz4" Sep 13 00:53:34.877352 kubelet[1898]: I0913 00:53:34.877375 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9284add-a0f7-41ab-ac2f-b602bfc3657c-config-volume\") pod \"coredns-7c65d6cfc9-z2v78\" (UID: \"a9284add-a0f7-41ab-ac2f-b602bfc3657c\") " pod="kube-system/coredns-7c65d6cfc9-z2v78" Sep 13 00:53:34.877608 kubelet[1898]: I0913 00:53:34.877389 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzdd\" (UniqueName: \"kubernetes.io/projected/0cc3c8fb-69b6-43fe-a2b3-806cc52cab49-kube-api-access-jmzdd\") pod \"coredns-7c65d6cfc9-g8mz4\" (UID: \"0cc3c8fb-69b6-43fe-a2b3-806cc52cab49\") " pod="kube-system/coredns-7c65d6cfc9-g8mz4" Sep 13 00:53:35.004841 kubelet[1898]: E0913 00:53:35.004780 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:35.005551 env[1203]: time="2025-09-13T00:53:35.005495054Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g8mz4,Uid:0cc3c8fb-69b6-43fe-a2b3-806cc52cab49,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:35.008908 kubelet[1898]: E0913 00:53:35.008879 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:35.009214 env[1203]: time="2025-09-13T00:53:35.009175817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z2v78,Uid:a9284add-a0f7-41ab-ac2f-b602bfc3657c,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:35.484193 kubelet[1898]: E0913 00:53:35.484160 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:36.486021 kubelet[1898]: E0913 00:53:36.485977 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:36.832968 systemd-networkd[1023]: cilium_host: Link UP Sep 13 00:53:36.833101 systemd-networkd[1023]: cilium_net: Link UP Sep 13 00:53:36.835969 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:53:36.836033 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:53:36.836160 systemd-networkd[1023]: cilium_net: Gained carrier Sep 13 00:53:36.836319 systemd-networkd[1023]: cilium_host: Gained carrier Sep 13 00:53:36.890192 systemd-networkd[1023]: cilium_host: Gained IPv6LL Sep 13 00:53:36.911534 systemd-networkd[1023]: cilium_vxlan: Link UP Sep 13 00:53:36.911542 systemd-networkd[1023]: cilium_vxlan: Gained carrier Sep 13 00:53:37.024020 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:43020.service. 
Sep 13 00:53:37.056373 sshd[2830]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:37.057765 sshd[2830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:37.061662 systemd[1]: Started session-6.scope. Sep 13 00:53:37.061799 systemd-logind[1194]: New session 6 of user core. Sep 13 00:53:37.103075 kernel: NET: Registered PF_ALG protocol family Sep 13 00:53:37.285477 sshd[2830]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:37.288066 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:43020.service: Deactivated successfully. Sep 13 00:53:37.288758 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:53:37.289342 systemd-logind[1194]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:53:37.290190 systemd-logind[1194]: Removed session 6. Sep 13 00:53:37.489977 kubelet[1898]: E0913 00:53:37.489872 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:37.621384 systemd-networkd[1023]: lxc_health: Link UP Sep 13 00:53:37.632461 systemd-networkd[1023]: lxc_health: Gained carrier Sep 13 00:53:37.633063 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:53:37.797308 systemd-networkd[1023]: cilium_net: Gained IPv6LL Sep 13 00:53:37.986224 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL Sep 13 00:53:38.077645 systemd-networkd[1023]: lxc969dd9adabf8: Link UP Sep 13 00:53:38.087226 systemd-networkd[1023]: lxc5b5e397d63b6: Link UP Sep 13 00:53:38.096078 kernel: eth0: renamed from tmp2c3dd Sep 13 00:53:38.112070 kernel: eth0: renamed from tmp0c6c3 Sep 13 00:53:38.125305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:53:38.125363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc969dd9adabf8: link becomes ready Sep 13 00:53:38.125707 
systemd-networkd[1023]: lxc969dd9adabf8: Gained carrier Sep 13 00:53:38.127410 systemd-networkd[1023]: lxc5b5e397d63b6: Gained carrier Sep 13 00:53:38.128057 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5b5e397d63b6: link becomes ready Sep 13 00:53:38.472374 kubelet[1898]: I0913 00:53:38.472315 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lr8rs" podStartSLOduration=10.674102223 podStartE2EDuration="22.472295588s" podCreationTimestamp="2025-09-13 00:53:16 +0000 UTC" firstStartedPulling="2025-09-13 00:53:18.485555213 +0000 UTC m=+8.198946663" lastFinishedPulling="2025-09-13 00:53:30.283748588 +0000 UTC m=+19.997140028" observedRunningTime="2025-09-13 00:53:35.620114502 +0000 UTC m=+25.333505952" watchObservedRunningTime="2025-09-13 00:53:38.472295588 +0000 UTC m=+28.185687038" Sep 13 00:53:38.491839 kubelet[1898]: E0913 00:53:38.491811 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:39.074236 systemd-networkd[1023]: lxc_health: Gained IPv6LL Sep 13 00:53:39.843206 systemd-networkd[1023]: lxc5b5e397d63b6: Gained IPv6LL Sep 13 00:53:40.162192 systemd-networkd[1023]: lxc969dd9adabf8: Gained IPv6LL Sep 13 00:53:41.373407 env[1203]: time="2025-09-13T00:53:41.373320984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:41.373790 env[1203]: time="2025-09-13T00:53:41.373409721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:41.373790 env[1203]: time="2025-09-13T00:53:41.373425521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:41.373970 env[1203]: time="2025-09-13T00:53:41.373920955Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c6c38f284236308a6ac2fff47ed9bb813eea4403152caf4df7aa866d47534ed pid=3148 runtime=io.containerd.runc.v2 Sep 13 00:53:41.374147 env[1203]: time="2025-09-13T00:53:41.374074173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:41.374147 env[1203]: time="2025-09-13T00:53:41.374125820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:41.374147 env[1203]: time="2025-09-13T00:53:41.374136321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:41.374439 env[1203]: time="2025-09-13T00:53:41.374376994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c3dd536513f72bc2c0a0720c4225760134ec9331877fd2b5ecf416e069a7787 pid=3157 runtime=io.containerd.runc.v2 Sep 13 00:53:41.389708 systemd[1]: Started cri-containerd-0c6c38f284236308a6ac2fff47ed9bb813eea4403152caf4df7aa866d47534ed.scope. Sep 13 00:53:41.399375 systemd[1]: Started cri-containerd-2c3dd536513f72bc2c0a0720c4225760134ec9331877fd2b5ecf416e069a7787.scope. 
Sep 13 00:53:41.402950 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:53:41.411147 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:53:41.420221 kubelet[1898]: I0913 00:53:41.420188 1898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:53:41.427303 kubelet[1898]: E0913 00:53:41.422746 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:41.427411 env[1203]: time="2025-09-13T00:53:41.427360697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z2v78,Uid:a9284add-a0f7-41ab-ac2f-b602bfc3657c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c6c38f284236308a6ac2fff47ed9bb813eea4403152caf4df7aa866d47534ed\"" Sep 13 00:53:41.432331 kubelet[1898]: E0913 00:53:41.432302 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:41.442806 env[1203]: time="2025-09-13T00:53:41.442755793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g8mz4,Uid:0cc3c8fb-69b6-43fe-a2b3-806cc52cab49,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c3dd536513f72bc2c0a0720c4225760134ec9331877fd2b5ecf416e069a7787\"" Sep 13 00:53:41.443350 kubelet[1898]: E0913 00:53:41.443326 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:41.447930 env[1203]: time="2025-09-13T00:53:41.447100892Z" level=info msg="CreateContainer within sandbox \"0c6c38f284236308a6ac2fff47ed9bb813eea4403152caf4df7aa866d47534ed\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:53:41.448008 env[1203]: time="2025-09-13T00:53:41.447924164Z" level=info msg="CreateContainer within sandbox \"2c3dd536513f72bc2c0a0720c4225760134ec9331877fd2b5ecf416e069a7787\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:53:41.470420 env[1203]: time="2025-09-13T00:53:41.470368166Z" level=info msg="CreateContainer within sandbox \"0c6c38f284236308a6ac2fff47ed9bb813eea4403152caf4df7aa866d47534ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60122a67daf1085fed2bc9d41eb25940769fba1778589484d48c9adc898133a0\"" Sep 13 00:53:41.471114 env[1203]: time="2025-09-13T00:53:41.471055641Z" level=info msg="StartContainer for \"60122a67daf1085fed2bc9d41eb25940769fba1778589484d48c9adc898133a0\"" Sep 13 00:53:41.471864 env[1203]: time="2025-09-13T00:53:41.471829260Z" level=info msg="CreateContainer within sandbox \"2c3dd536513f72bc2c0a0720c4225760134ec9331877fd2b5ecf416e069a7787\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"993221835b2fc764c079d169d1fb3ada03edf2bd5f4472fe663d58036f51ab84\"" Sep 13 00:53:41.472109 env[1203]: time="2025-09-13T00:53:41.472084942Z" level=info msg="StartContainer for \"993221835b2fc764c079d169d1fb3ada03edf2bd5f4472fe663d58036f51ab84\"" Sep 13 00:53:41.491593 systemd[1]: Started cri-containerd-60122a67daf1085fed2bc9d41eb25940769fba1778589484d48c9adc898133a0.scope. Sep 13 00:53:41.495404 systemd[1]: Started cri-containerd-993221835b2fc764c079d169d1fb3ada03edf2bd5f4472fe663d58036f51ab84.scope. 
Sep 13 00:53:41.505695 kubelet[1898]: E0913 00:53:41.504576 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:41.526444 env[1203]: time="2025-09-13T00:53:41.526370087Z" level=info msg="StartContainer for \"993221835b2fc764c079d169d1fb3ada03edf2bd5f4472fe663d58036f51ab84\" returns successfully" Sep 13 00:53:41.528657 env[1203]: time="2025-09-13T00:53:41.528615580Z" level=info msg="StartContainer for \"60122a67daf1085fed2bc9d41eb25940769fba1778589484d48c9adc898133a0\" returns successfully" Sep 13 00:53:42.289396 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:45128.service. Sep 13 00:53:42.321749 sshd[3295]: Accepted publickey for core from 10.0.0.1 port 45128 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:42.323087 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:42.327047 systemd-logind[1194]: New session 7 of user core. Sep 13 00:53:42.328158 systemd[1]: Started session-7.scope. Sep 13 00:53:42.379240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552098768.mount: Deactivated successfully. Sep 13 00:53:42.450426 sshd[3295]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:42.452656 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:45128.service: Deactivated successfully. Sep 13 00:53:42.453414 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:53:42.454252 systemd-logind[1194]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:53:42.454966 systemd-logind[1194]: Removed session 7. 
Sep 13 00:53:42.507841 kubelet[1898]: E0913 00:53:42.507794 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:42.510611 kubelet[1898]: E0913 00:53:42.510569 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:42.528608 kubelet[1898]: I0913 00:53:42.528400 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z2v78" podStartSLOduration=26.528382727 podStartE2EDuration="26.528382727s" podCreationTimestamp="2025-09-13 00:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:42.528217645 +0000 UTC m=+32.241609095" watchObservedRunningTime="2025-09-13 00:53:42.528382727 +0000 UTC m=+32.241774177" Sep 13 00:53:42.528846 kubelet[1898]: I0913 00:53:42.528754 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g8mz4" podStartSLOduration=26.528746773 podStartE2EDuration="26.528746773s" podCreationTimestamp="2025-09-13 00:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:42.517601573 +0000 UTC m=+32.230993033" watchObservedRunningTime="2025-09-13 00:53:42.528746773 +0000 UTC m=+32.242138223" Sep 13 00:53:43.511968 kubelet[1898]: E0913 00:53:43.511924 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:43.512323 kubelet[1898]: E0913 00:53:43.511924 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:44.513833 kubelet[1898]: E0913 00:53:44.513803 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:44.514259 kubelet[1898]: E0913 00:53:44.513803 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.454978 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:45132.service. Sep 13 00:53:47.486197 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 45132 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:47.487231 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:47.490234 systemd-logind[1194]: New session 8 of user core. Sep 13 00:53:47.491138 systemd[1]: Started session-8.scope. Sep 13 00:53:47.655883 sshd[3316]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:47.658184 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:45132.service: Deactivated successfully. Sep 13 00:53:47.659241 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:53:47.659772 systemd-logind[1194]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:53:47.660393 systemd-logind[1194]: Removed session 8. Sep 13 00:53:52.660913 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:53352.service. Sep 13 00:53:52.696539 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 53352 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:52.697742 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:52.701124 systemd-logind[1194]: New session 9 of user core. Sep 13 00:53:52.702133 systemd[1]: Started session-9.scope. 
Sep 13 00:53:52.813492 sshd[3332]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:52.815655 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:53352.service: Deactivated successfully. Sep 13 00:53:52.816343 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:53:52.816993 systemd-logind[1194]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:53:52.817662 systemd-logind[1194]: Removed session 9. Sep 13 00:53:57.817269 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:53364.service. Sep 13 00:53:57.846798 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 53364 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:57.847740 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:57.850910 systemd-logind[1194]: New session 10 of user core. Sep 13 00:53:57.851881 systemd[1]: Started session-10.scope. Sep 13 00:53:57.960723 sshd[3346]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:57.963285 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:53364.service: Deactivated successfully. Sep 13 00:53:57.963736 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:53:57.965698 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:53374.service. Sep 13 00:53:57.966424 systemd-logind[1194]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:53:57.967468 systemd-logind[1194]: Removed session 10. Sep 13 00:53:57.998016 sshd[3361]: Accepted publickey for core from 10.0.0.1 port 53374 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:57.999188 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:58.002342 systemd-logind[1194]: New session 11 of user core. Sep 13 00:53:58.003246 systemd[1]: Started session-11.scope. 
Sep 13 00:53:58.154202 sshd[3361]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:58.156601 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:53374.service: Deactivated successfully. Sep 13 00:53:58.157071 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:53:58.158699 systemd-logind[1194]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:53:58.158905 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:53388.service. Sep 13 00:53:58.160250 systemd-logind[1194]: Removed session 11. Sep 13 00:53:58.198207 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 53388 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:58.199321 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:58.202231 systemd-logind[1194]: New session 12 of user core. Sep 13 00:53:58.202968 systemd[1]: Started session-12.scope. Sep 13 00:53:58.310951 sshd[3373]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:58.312985 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:53388.service: Deactivated successfully. Sep 13 00:53:58.313724 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:53:58.314538 systemd-logind[1194]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:53:58.315268 systemd-logind[1194]: Removed session 12. Sep 13 00:54:03.315700 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:57460.service. Sep 13 00:54:03.346542 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 57460 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:03.347664 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:03.351123 systemd-logind[1194]: New session 13 of user core. Sep 13 00:54:03.351820 systemd[1]: Started session-13.scope. 
Sep 13 00:54:03.461111 sshd[3387]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:03.463053 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:57460.service: Deactivated successfully. Sep 13 00:54:03.463712 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:54:03.464392 systemd-logind[1194]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:54:03.465008 systemd-logind[1194]: Removed session 13. Sep 13 00:54:08.464769 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:57474.service. Sep 13 00:54:08.497108 sshd[3401]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:08.498138 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:08.501216 systemd-logind[1194]: New session 14 of user core. Sep 13 00:54:08.502131 systemd[1]: Started session-14.scope. Sep 13 00:54:08.606421 sshd[3401]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:08.609240 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:57474.service: Deactivated successfully. Sep 13 00:54:08.609773 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:54:08.610336 systemd-logind[1194]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:54:08.611327 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:57486.service. Sep 13 00:54:08.612111 systemd-logind[1194]: Removed session 14. Sep 13 00:54:08.641493 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 57486 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:08.642751 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:08.646556 systemd-logind[1194]: New session 15 of user core. Sep 13 00:54:08.647401 systemd[1]: Started session-15.scope. 
Sep 13 00:54:08.875790 sshd[3414]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:08.878733 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:57486.service: Deactivated successfully. Sep 13 00:54:08.879469 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:54:08.880109 systemd-logind[1194]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:54:08.881791 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:57494.service. Sep 13 00:54:08.882915 systemd-logind[1194]: Removed session 15. Sep 13 00:54:08.913135 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 57494 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:08.913989 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:08.917542 systemd-logind[1194]: New session 16 of user core. Sep 13 00:54:08.918642 systemd[1]: Started session-16.scope. Sep 13 00:54:09.971269 sshd[3425]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:09.975548 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:42082.service. Sep 13 00:54:09.976028 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:57494.service: Deactivated successfully. Sep 13 00:54:09.976553 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:54:09.978689 systemd-logind[1194]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:54:09.979984 systemd-logind[1194]: Removed session 16. Sep 13 00:54:10.009414 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 42082 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:10.010701 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:10.014281 systemd-logind[1194]: New session 17 of user core. Sep 13 00:54:10.015094 systemd[1]: Started session-17.scope. 
Sep 13 00:54:10.254165 sshd[3444]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:10.258029 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:42092.service. Sep 13 00:54:10.259168 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:42082.service: Deactivated successfully. Sep 13 00:54:10.259737 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:54:10.260662 systemd-logind[1194]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:54:10.261497 systemd-logind[1194]: Removed session 17. Sep 13 00:54:10.289401 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:10.290465 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:10.294062 systemd-logind[1194]: New session 18 of user core. Sep 13 00:54:10.294903 systemd[1]: Started session-18.scope. Sep 13 00:54:10.404398 sshd[3456]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:10.408235 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:42092.service: Deactivated successfully. Sep 13 00:54:10.409111 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:54:10.409734 systemd-logind[1194]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:54:10.410584 systemd-logind[1194]: Removed session 18. Sep 13 00:54:15.408265 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:42094.service. Sep 13 00:54:15.438241 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 42094 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:15.439366 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:15.443281 systemd-logind[1194]: New session 19 of user core. Sep 13 00:54:15.444182 systemd[1]: Started session-19.scope. 
Sep 13 00:54:15.542561 sshd[3475]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:15.544712 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:42094.service: Deactivated successfully. Sep 13 00:54:15.545356 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:54:15.545891 systemd-logind[1194]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:54:15.546523 systemd-logind[1194]: Removed session 19. Sep 13 00:54:20.547314 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:48678.service. Sep 13 00:54:20.577704 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 48678 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:20.578997 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:20.582527 systemd-logind[1194]: New session 20 of user core. Sep 13 00:54:20.583425 systemd[1]: Started session-20.scope. Sep 13 00:54:20.682691 sshd[3494]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:20.685025 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:48678.service: Deactivated successfully. Sep 13 00:54:20.685735 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:54:20.686202 systemd-logind[1194]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:54:20.686860 systemd-logind[1194]: Removed session 20. Sep 13 00:54:25.687723 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:48694.service. Sep 13 00:54:25.718509 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 48694 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:25.719492 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:25.722593 systemd-logind[1194]: New session 21 of user core. Sep 13 00:54:25.723336 systemd[1]: Started session-21.scope. 
Sep 13 00:54:25.818928 sshd[3507]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:25.821204 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:48694.service: Deactivated successfully. Sep 13 00:54:25.821808 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:54:25.822505 systemd-logind[1194]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:54:25.823125 systemd-logind[1194]: Removed session 21. Sep 13 00:54:30.824254 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:53120.service. Sep 13 00:54:30.855473 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:30.856467 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:30.859946 systemd-logind[1194]: New session 22 of user core. Sep 13 00:54:30.860483 systemd[1]: Started session-22.scope. Sep 13 00:54:30.957317 sshd[3520]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:30.960418 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:53120.service: Deactivated successfully. Sep 13 00:54:30.961129 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:54:30.961673 systemd-logind[1194]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:54:30.962795 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:53136.service. Sep 13 00:54:30.963588 systemd-logind[1194]: Removed session 22. Sep 13 00:54:30.992616 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:30.993547 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:30.996524 systemd-logind[1194]: New session 23 of user core. Sep 13 00:54:30.997289 systemd[1]: Started session-23.scope. 
Sep 13 00:54:32.310695 env[1203]: time="2025-09-13T00:54:32.310419627Z" level=info msg="StopContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" with timeout 30 (s)"
Sep 13 00:54:32.311148 env[1203]: time="2025-09-13T00:54:32.310908616Z" level=info msg="Stop container \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" with signal terminated"
Sep 13 00:54:32.321124 systemd[1]: cri-containerd-d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4.scope: Deactivated successfully.
Sep 13 00:54:32.337715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4-rootfs.mount: Deactivated successfully.
Sep 13 00:54:32.338309 env[1203]: time="2025-09-13T00:54:32.338242345Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:54:32.340940 env[1203]: time="2025-09-13T00:54:32.340878245Z" level=info msg="StopContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" with timeout 2 (s)"
Sep 13 00:54:32.341188 env[1203]: time="2025-09-13T00:54:32.341132252Z" level=info msg="Stop container \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" with signal terminated"
Sep 13 00:54:32.348829 systemd-networkd[1023]: lxc_health: Link DOWN
Sep 13 00:54:32.348836 systemd-networkd[1023]: lxc_health: Lost carrier
Sep 13 00:54:32.349350 env[1203]: time="2025-09-13T00:54:32.349311934Z" level=info msg="shim disconnected" id=d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4
Sep 13 00:54:32.349350 env[1203]: time="2025-09-13T00:54:32.349349025Z" level=warning msg="cleaning up after shim disconnected" id=d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4 namespace=k8s.io
Sep 13 00:54:32.349455 env[1203]: time="2025-09-13T00:54:32.349357502Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:32.357142 env[1203]: time="2025-09-13T00:54:32.357100906Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3587 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:32.422418 systemd[1]: cri-containerd-6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3.scope: Deactivated successfully.
Sep 13 00:54:32.422725 systemd[1]: cri-containerd-6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3.scope: Consumed 5.922s CPU time.
Sep 13 00:54:32.437844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3-rootfs.mount: Deactivated successfully.
Sep 13 00:54:32.438225 env[1203]: time="2025-09-13T00:54:32.437821827Z" level=info msg="StopContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" returns successfully"
Sep 13 00:54:32.438918 env[1203]: time="2025-09-13T00:54:32.438884486Z" level=info msg="StopPodSandbox for \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\""
Sep 13 00:54:32.438997 env[1203]: time="2025-09-13T00:54:32.438951966Z" level=info msg="Container to stop \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.440392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0-shm.mount: Deactivated successfully.
Sep 13 00:54:32.446835 systemd[1]: cri-containerd-ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0.scope: Deactivated successfully.
Sep 13 00:54:32.462421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0-rootfs.mount: Deactivated successfully.
Sep 13 00:54:32.627735 env[1203]: time="2025-09-13T00:54:32.627685046Z" level=info msg="shim disconnected" id=6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3
Sep 13 00:54:32.627735 env[1203]: time="2025-09-13T00:54:32.627695165Z" level=info msg="shim disconnected" id=ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0
Sep 13 00:54:32.627735 env[1203]: time="2025-09-13T00:54:32.627730784Z" level=warning msg="cleaning up after shim disconnected" id=6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3 namespace=k8s.io
Sep 13 00:54:32.628071 env[1203]: time="2025-09-13T00:54:32.627745852Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:32.628071 env[1203]: time="2025-09-13T00:54:32.627749310Z" level=warning msg="cleaning up after shim disconnected" id=ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0 namespace=k8s.io
Sep 13 00:54:32.628071 env[1203]: time="2025-09-13T00:54:32.627761162Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:32.634694 env[1203]: time="2025-09-13T00:54:32.634651538Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3633 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:32.635085 env[1203]: time="2025-09-13T00:54:32.634664804Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:32.635317 env[1203]: time="2025-09-13T00:54:32.635292279Z" level=info msg="TearDown network for sandbox \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\" successfully"
Sep 13 00:54:32.635317 env[1203]: time="2025-09-13T00:54:32.635313600Z" level=info msg="StopPodSandbox for \"ce1b26ae0b037e6191d290d679dc30304e971acf4d000baeb20a3191b1e817b0\" returns successfully"
Sep 13 00:54:32.705714 env[1203]: time="2025-09-13T00:54:32.705676688Z" level=info msg="StopContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" returns successfully"
Sep 13 00:54:32.706080 env[1203]: time="2025-09-13T00:54:32.706032650Z" level=info msg="StopPodSandbox for \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\""
Sep 13 00:54:32.706171 env[1203]: time="2025-09-13T00:54:32.706125268Z" level=info msg="Container to stop \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.706171 env[1203]: time="2025-09-13T00:54:32.706151309Z" level=info msg="Container to stop \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.706251 env[1203]: time="2025-09-13T00:54:32.706168281Z" level=info msg="Container to stop \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.706251 env[1203]: time="2025-09-13T00:54:32.706183410Z" level=info msg="Container to stop \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.706251 env[1203]: time="2025-09-13T00:54:32.706200072Z" level=info msg="Container to stop \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:54:32.711236 systemd[1]: cri-containerd-800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf.scope: Deactivated successfully.
Sep 13 00:54:32.736990 env[1203]: time="2025-09-13T00:54:32.736938487Z" level=info msg="shim disconnected" id=800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf
Sep 13 00:54:32.736990 env[1203]: time="2025-09-13T00:54:32.736987110Z" level=warning msg="cleaning up after shim disconnected" id=800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf namespace=k8s.io
Sep 13 00:54:32.736990 env[1203]: time="2025-09-13T00:54:32.736995858Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:32.742890 env[1203]: time="2025-09-13T00:54:32.742861327Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:32.743148 env[1203]: time="2025-09-13T00:54:32.743129491Z" level=info msg="TearDown network for sandbox \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" successfully"
Sep 13 00:54:32.743180 env[1203]: time="2025-09-13T00:54:32.743149440Z" level=info msg="StopPodSandbox for \"800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf\" returns successfully"
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876448 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-etc-cni-netd\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876509 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-bpf-maps\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876534 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-hostproc\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876531 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876553 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-lib-modules\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.877393 kubelet[1898]: I0913 00:54:32.876579 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.878008 kubelet[1898]: I0913 00:54:32.876580 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d994bcba-ef33-4e46-8658-37609ee72b0f-clustermesh-secrets\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878008 kubelet[1898]: I0913 00:54:32.876594 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-hostproc" (OuterVolumeSpecName: "hostproc") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.878008 kubelet[1898]: I0913 00:54:32.876607 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.878008 kubelet[1898]: I0913 00:54:32.876611 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878008 kubelet[1898]: I0913 00:54:32.876633 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-xtables-lock\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876651 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-kernel\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876669 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cni-path\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876686 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-net\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876704 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-run\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876720 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-cgroup\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878344 kubelet[1898]: I0913 00:54:32.876737 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-cilium-config-path\") pod \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\" (UID: \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\") "
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876756 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876773 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g7qw\" (UniqueName: \"kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw\") pod \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\" (UID: \"7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95\") "
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876795 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxcp\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp\") pod \"d994bcba-ef33-4e46-8658-37609ee72b0f\" (UID: \"d994bcba-ef33-4e46-8658-37609ee72b0f\") "
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876830 1898 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876843 1898 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876854 1898 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.878528 kubelet[1898]: I0913 00:54:32.876865 1898 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.878736 kubelet[1898]: I0913 00:54:32.876874 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.880071 kubelet[1898]: I0913 00:54:32.879177 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95" (UID: "7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:54:32.880071 kubelet[1898]: I0913 00:54:32.879218 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.880071 kubelet[1898]: I0913 00:54:32.879237 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.880071 kubelet[1898]: I0913 00:54:32.879251 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.880071 kubelet[1898]: I0913 00:54:32.879265 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.880297 kubelet[1898]: I0913 00:54:32.879360 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:54:32.880297 kubelet[1898]: I0913 00:54:32.879395 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cni-path" (OuterVolumeSpecName: "cni-path") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:54:32.881787 kubelet[1898]: I0913 00:54:32.881759 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw" (OuterVolumeSpecName: "kube-api-access-8g7qw") pod "7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95" (UID: "7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95"). InnerVolumeSpecName "kube-api-access-8g7qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:54:32.881872 kubelet[1898]: I0913 00:54:32.881766 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp" (OuterVolumeSpecName: "kube-api-access-ptxcp") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "kube-api-access-ptxcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:54:32.882054 kubelet[1898]: I0913 00:54:32.882010 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d994bcba-ef33-4e46-8658-37609ee72b0f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:54:32.882477 kubelet[1898]: I0913 00:54:32.882450 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d994bcba-ef33-4e46-8658-37609ee72b0f" (UID: "d994bcba-ef33-4e46-8658-37609ee72b0f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:54:32.977222 kubelet[1898]: I0913 00:54:32.977171 1898 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977222 kubelet[1898]: I0913 00:54:32.977212 1898 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977222 kubelet[1898]: I0913 00:54:32.977221 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977222 kubelet[1898]: I0913 00:54:32.977229 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977222 kubelet[1898]: I0913 00:54:32.977236 1898 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977244 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977252 1898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g7qw\" (UniqueName: \"kubernetes.io/projected/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95-kube-api-access-8g7qw\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977260 1898 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977267 1898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptxcp\" (UniqueName: \"kubernetes.io/projected/d994bcba-ef33-4e46-8658-37609ee72b0f-kube-api-access-ptxcp\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977274 1898 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d994bcba-ef33-4e46-8658-37609ee72b0f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977281 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d994bcba-ef33-4e46-8658-37609ee72b0f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:32.977492 kubelet[1898]: I0913 00:54:32.977289 1898 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d994bcba-ef33-4e46-8658-37609ee72b0f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:33.319723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf-rootfs.mount: Deactivated successfully.
Sep 13 00:54:33.319808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-800ef04da00bf1cc548f59ded243cf5d35aeed92557faf5167103fea4278c1cf-shm.mount: Deactivated successfully.
Sep 13 00:54:33.319861 systemd[1]: var-lib-kubelet-pods-7d43ccf7\x2d2bc1\x2d4fa2\x2da9df\x2d98aee5a32e95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8g7qw.mount: Deactivated successfully.
Sep 13 00:54:33.319933 systemd[1]: var-lib-kubelet-pods-d994bcba\x2def33\x2d4e46\x2d8658\x2d37609ee72b0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptxcp.mount: Deactivated successfully.
Sep 13 00:54:33.319987 systemd[1]: var-lib-kubelet-pods-d994bcba\x2def33\x2d4e46\x2d8658\x2d37609ee72b0f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:54:33.320034 systemd[1]: var-lib-kubelet-pods-d994bcba\x2def33\x2d4e46\x2d8658\x2d37609ee72b0f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:54:33.606103 kubelet[1898]: I0913 00:54:33.606077 1898 scope.go:117] "RemoveContainer" containerID="d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4"
Sep 13 00:54:33.607384 env[1203]: time="2025-09-13T00:54:33.607338480Z" level=info msg="RemoveContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\""
Sep 13 00:54:33.610976 systemd[1]: Removed slice kubepods-besteffort-pod7d43ccf7_2bc1_4fa2_a9df_98aee5a32e95.slice.
Sep 13 00:54:33.612109 kubelet[1898]: I0913 00:54:33.611495 1898 scope.go:117] "RemoveContainer" containerID="6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3"
Sep 13 00:54:33.612168 env[1203]: time="2025-09-13T00:54:33.611284289Z" level=info msg="RemoveContainer for \"d4b1055733ba1c32439b7f5f799ff8485bdf565d193bec4128cf354eb8d01dc4\" returns successfully"
Sep 13 00:54:33.612439 env[1203]: time="2025-09-13T00:54:33.612406393Z" level=info msg="RemoveContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\""
Sep 13 00:54:33.614030 systemd[1]: Removed slice kubepods-burstable-podd994bcba_ef33_4e46_8658_37609ee72b0f.slice.
Sep 13 00:54:33.614140 systemd[1]: kubepods-burstable-podd994bcba_ef33_4e46_8658_37609ee72b0f.slice: Consumed 6.017s CPU time.
Sep 13 00:54:33.615567 env[1203]: time="2025-09-13T00:54:33.615536257Z" level=info msg="RemoveContainer for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" returns successfully"
Sep 13 00:54:33.615686 kubelet[1898]: I0913 00:54:33.615654 1898 scope.go:117] "RemoveContainer" containerID="0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848"
Sep 13 00:54:33.616638 env[1203]: time="2025-09-13T00:54:33.616580450Z" level=info msg="RemoveContainer for \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\""
Sep 13 00:54:33.619327 env[1203]: time="2025-09-13T00:54:33.619306751Z" level=info msg="RemoveContainer for \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\" returns successfully"
Sep 13 00:54:33.619463 kubelet[1898]: I0913 00:54:33.619441 1898 scope.go:117] "RemoveContainer" containerID="afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0"
Sep 13 00:54:33.620210 env[1203]: time="2025-09-13T00:54:33.620189845Z" level=info msg="RemoveContainer for \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\""
Sep 13 00:54:33.623227 env[1203]: time="2025-09-13T00:54:33.623194358Z" level=info msg="RemoveContainer for \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\" returns successfully"
Sep 13 00:54:33.623386 kubelet[1898]: I0913 00:54:33.623366 1898 scope.go:117] "RemoveContainer" containerID="c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32"
Sep 13 00:54:33.624594 env[1203]: time="2025-09-13T00:54:33.624537305Z" level=info msg="RemoveContainer for \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\""
Sep 13 00:54:33.627685 env[1203]: time="2025-09-13T00:54:33.627655928Z" level=info msg="RemoveContainer for \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\" returns successfully"
Sep 13 00:54:33.627800 kubelet[1898]: I0913 00:54:33.627780 1898 scope.go:117] "RemoveContainer" containerID="cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82"
Sep 13 00:54:33.628693 env[1203]: time="2025-09-13T00:54:33.628673921Z" level=info msg="RemoveContainer for \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\""
Sep 13 00:54:33.631621 env[1203]: time="2025-09-13T00:54:33.631572862Z" level=info msg="RemoveContainer for \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\" returns successfully"
Sep 13 00:54:33.632117 kubelet[1898]: I0913 00:54:33.632096 1898 scope.go:117] "RemoveContainer" containerID="6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3"
Sep 13 00:54:33.632325 env[1203]: time="2025-09-13T00:54:33.632257827Z" level=error msg="ContainerStatus for \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\": not found"
Sep 13 00:54:33.632617 kubelet[1898]: E0913 00:54:33.632573 1898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\": not found" containerID="6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3"
Sep 13 00:54:33.632724 kubelet[1898]: I0913 00:54:33.632609 1898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3"} err="failed to get container status \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dba7a71856119025b309db6c96d2ab44fb18aaf12c302e77a4e95ecd74b28f3\": not found"
Sep 13 00:54:33.632724 kubelet[1898]: I0913 00:54:33.632715 1898 scope.go:117] "RemoveContainer" containerID="0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848"
Sep 13 00:54:33.632885 env[1203]: time="2025-09-13T00:54:33.632846826Z" level=error msg="ContainerStatus for \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\": not found"
Sep 13 00:54:33.633010 kubelet[1898]: E0913 00:54:33.632977 1898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\": not found" containerID="0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848"
Sep 13 00:54:33.633082 kubelet[1898]: I0913 00:54:33.633030 1898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848"} err="failed to get container status \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e57f694765ff5bdf0de2f4c8901a8afcb75e18fe47594f23526c0cba25a3848\": not found"
Sep 13 00:54:33.633183 kubelet[1898]: I0913 00:54:33.633086 1898 scope.go:117] "RemoveContainer" containerID="afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0"
Sep 13 00:54:33.633373 env[1203]: time="2025-09-13T00:54:33.633300838Z" level=error msg="ContainerStatus for \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\": not found"
Sep 13 00:54:33.633459 kubelet[1898]: E0913 00:54:33.633434 1898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\": not found" containerID="afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0"
Sep 13 00:54:33.633459 kubelet[1898]: I0913 00:54:33.633451 1898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0"} err="failed to get container status \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\": rpc error: code = NotFound desc = an error occurred when try to find container \"afe6f0ea0299e9534ab215b6fe9de7b26bbd6d90dc000b67336059daecf35ba0\": not found"
Sep 13 00:54:33.633571 kubelet[1898]: I0913 00:54:33.633464 1898 scope.go:117] "RemoveContainer" containerID="c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32"
Sep 13 00:54:33.633636 env[1203]: time="2025-09-13T00:54:33.633566407Z" level=error msg="ContainerStatus for \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\": not found"
Sep 13 00:54:33.634083 kubelet[1898]: E0913 00:54:33.634058 1898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\": not found" containerID="c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32"
Sep 13 00:54:33.634211 kubelet[1898]: I0913 00:54:33.634186 1898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32"} err="failed to get container status \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\": rpc error: code = NotFound desc = an error occurred when try to find container \"c411bb6ad3ae730c215dcdc221b05a963575e9ba6352a9af73e72f89e3c55b32\": not found"
Sep 13 00:54:33.634211 kubelet[1898]: I0913 00:54:33.634208 1898 scope.go:117] "RemoveContainer" containerID="cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82"
Sep 13 00:54:33.634427 env[1203]: time="2025-09-13T00:54:33.634369457Z" level=error msg="ContainerStatus for \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\": not found"
Sep 13 00:54:33.634544 kubelet[1898]: E0913 00:54:33.634524 1898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\": not found" containerID="cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82"
Sep 13 00:54:33.634592 kubelet[1898]: I0913 00:54:33.634545 1898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82"}
err="failed to get container status \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc3a72cd33e7faf6dc3789039a570fa9a2de7de669111e7ff098558e413f7e82\": not found" Sep 13 00:54:34.280917 sshd[3533]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:34.283553 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:53136.service: Deactivated successfully. Sep 13 00:54:34.284093 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:54:34.284626 systemd-logind[1194]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:54:34.285660 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:53144.service. Sep 13 00:54:34.288393 systemd-logind[1194]: Removed session 23. Sep 13 00:54:34.317157 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 53144 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:34.318217 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:34.321952 systemd-logind[1194]: New session 24 of user core. Sep 13 00:54:34.322664 systemd[1]: Started session-24.scope. 
Sep 13 00:54:34.411666 kubelet[1898]: E0913 00:54:34.411627 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:34.413406 kubelet[1898]: I0913 00:54:34.413377 1898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95" path="/var/lib/kubelet/pods/7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95/volumes" Sep 13 00:54:34.413724 kubelet[1898]: I0913 00:54:34.413706 1898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" path="/var/lib/kubelet/pods/d994bcba-ef33-4e46-8658-37609ee72b0f/volumes" Sep 13 00:54:34.759517 sshd[3692]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:34.761106 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:53154.service. Sep 13 00:54:34.765983 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:53144.service: Deactivated successfully. Sep 13 00:54:34.766908 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:54:34.768197 systemd-logind[1194]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:54:34.769163 systemd-logind[1194]: Removed session 24. 
Sep 13 00:54:34.781327 kubelet[1898]: E0913 00:54:34.781296 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="cilium-agent" Sep 13 00:54:34.781522 kubelet[1898]: E0913 00:54:34.781507 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="clean-cilium-state" Sep 13 00:54:34.781619 kubelet[1898]: E0913 00:54:34.781604 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95" containerName="cilium-operator" Sep 13 00:54:34.781690 kubelet[1898]: E0913 00:54:34.781676 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="mount-cgroup" Sep 13 00:54:34.781763 kubelet[1898]: E0913 00:54:34.781749 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="apply-sysctl-overwrites" Sep 13 00:54:34.781833 kubelet[1898]: E0913 00:54:34.781819 1898 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="mount-bpf-fs" Sep 13 00:54:34.781963 kubelet[1898]: I0913 00:54:34.781947 1898 memory_manager.go:354] "RemoveStaleState removing state" podUID="d994bcba-ef33-4e46-8658-37609ee72b0f" containerName="cilium-agent" Sep 13 00:54:34.782034 kubelet[1898]: I0913 00:54:34.782019 1898 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d43ccf7-2bc1-4fa2-a9df-98aee5a32e95" containerName="cilium-operator" Sep 13 00:54:34.787166 systemd[1]: Created slice kubepods-burstable-pod8fa91f20_0185_49fb_87b8_ac93ce126c61.slice. 
Sep 13 00:54:34.792849 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 53154 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:34.794343 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:34.801187 systemd[1]: Started session-25.scope. Sep 13 00:54:34.801612 systemd-logind[1194]: New session 25 of user core. Sep 13 00:54:34.886443 kubelet[1898]: I0913 00:54:34.886382 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-etc-cni-netd\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886443 kubelet[1898]: I0913 00:54:34.886425 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-xtables-lock\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886443 kubelet[1898]: I0913 00:54:34.886440 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-bpf-maps\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886443 kubelet[1898]: I0913 00:54:34.886456 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-hostproc\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886473 1898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-config-path\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886489 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-kernel\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886506 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-hubble-tls\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886518 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-lib-modules\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886534 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-clustermesh-secrets\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886684 kubelet[1898]: I0913 00:54:34.886547 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-cgroup\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886821 kubelet[1898]: I0913 00:54:34.886560 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9b4k\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-kube-api-access-t9b4k\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886821 kubelet[1898]: I0913 00:54:34.886572 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cni-path\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886821 kubelet[1898]: I0913 00:54:34.886588 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-ipsec-secrets\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886821 kubelet[1898]: I0913 00:54:34.886600 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-net\") pod \"cilium-ntlg8\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.886821 kubelet[1898]: I0913 00:54:34.886612 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-run\") pod \"cilium-ntlg8\" (UID: 
\"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " pod="kube-system/cilium-ntlg8" Sep 13 00:54:34.915991 sshd[3703]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:34.918919 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:53154.service: Deactivated successfully. Sep 13 00:54:34.919476 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:54:34.921276 systemd-logind[1194]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:54:34.921653 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:53170.service. Sep 13 00:54:34.922866 systemd-logind[1194]: Removed session 25. Sep 13 00:54:34.928480 kubelet[1898]: E0913 00:54:34.928426 1898 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-t9b4k lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ntlg8" podUID="8fa91f20-0185-49fb-87b8-ac93ce126c61" Sep 13 00:54:34.953023 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:54:34.954244 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:34.957478 systemd-logind[1194]: New session 26 of user core. Sep 13 00:54:34.958279 systemd[1]: Started session-26.scope. 
Sep 13 00:54:35.469834 kubelet[1898]: E0913 00:54:35.469766 1898 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:54:35.792759 kubelet[1898]: I0913 00:54:35.792588 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-etc-cni-netd\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.792759 kubelet[1898]: I0913 00:54:35.792655 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-config-path\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.792759 kubelet[1898]: I0913 00:54:35.792678 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-net\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.792759 kubelet[1898]: I0913 00:54:35.792692 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-bpf-maps\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793023 kubelet[1898]: I0913 00:54:35.792745 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793023 kubelet[1898]: I0913 00:54:35.792766 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793023 kubelet[1898]: I0913 00:54:35.792801 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-cgroup\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793023 kubelet[1898]: I0913 00:54:35.792820 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793023 kubelet[1898]: I0913 00:54:35.792856 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793220 kubelet[1898]: I0913 00:54:35.792851 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-kernel\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793220 kubelet[1898]: I0913 00:54:35.792870 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793220 kubelet[1898]: I0913 00:54:35.792924 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-hubble-tls\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793220 kubelet[1898]: I0913 00:54:35.792953 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9b4k\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-kube-api-access-t9b4k\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793220 kubelet[1898]: I0913 00:54:35.792974 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-ipsec-secrets\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793220 kubelet[1898]: I0913 
00:54:35.792992 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-xtables-lock\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793013 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-clustermesh-secrets\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793032 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cni-path\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793068 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-run\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793088 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-hostproc\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: \"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793105 1898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-lib-modules\") pod \"8fa91f20-0185-49fb-87b8-ac93ce126c61\" (UID: 
\"8fa91f20-0185-49fb-87b8-ac93ce126c61\") " Sep 13 00:54:35.793406 kubelet[1898]: I0913 00:54:35.793133 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793543 1898 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793556 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793564 1898 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793572 1898 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793579 1898 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.793578 kubelet[1898]: I0913 00:54:35.793586 1898 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:35.794856 kubelet[1898]: I0913 00:54:35.794768 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:54:35.794856 kubelet[1898]: I0913 00:54:35.794842 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.794953 kubelet[1898]: I0913 00:54:35.794864 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cni-path" (OuterVolumeSpecName: "cni-path") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.794953 kubelet[1898]: I0913 00:54:35.794879 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-hostproc" (OuterVolumeSpecName: "hostproc") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.795337 kubelet[1898]: I0913 00:54:35.795295 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:54:35.796841 kubelet[1898]: I0913 00:54:35.796805 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:54:35.797790 systemd[1]: var-lib-kubelet-pods-8fa91f20\x2d0185\x2d49fb\x2d87b8\x2dac93ce126c61-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:54:35.800444 systemd[1]: var-lib-kubelet-pods-8fa91f20\x2d0185\x2d49fb\x2d87b8\x2dac93ce126c61-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:54:35.800555 systemd[1]: var-lib-kubelet-pods-8fa91f20\x2d0185\x2d49fb\x2d87b8\x2dac93ce126c61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9b4k.mount: Deactivated successfully. Sep 13 00:54:35.800652 systemd[1]: var-lib-kubelet-pods-8fa91f20\x2d0185\x2d49fb\x2d87b8\x2dac93ce126c61-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:54:35.800844 kubelet[1898]: I0913 00:54:35.800810 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:54:35.800892 kubelet[1898]: I0913 00:54:35.800811 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:54:35.801380 kubelet[1898]: I0913 00:54:35.801243 1898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-kube-api-access-t9b4k" (OuterVolumeSpecName: "kube-api-access-t9b4k") pod "8fa91f20-0185-49fb-87b8-ac93ce126c61" (UID: "8fa91f20-0185-49fb-87b8-ac93ce126c61"). InnerVolumeSpecName "kube-api-access-t9b4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894240 1898 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894272 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894282 1898 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894291 1898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9b4k\" (UniqueName: \"kubernetes.io/projected/8fa91f20-0185-49fb-87b8-ac93ce126c61-kube-api-access-t9b4k\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894300 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894307 1898 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894298 kubelet[1898]: I0913 00:54:35.894313 1898 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fa91f20-0185-49fb-87b8-ac93ce126c61-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894635 kubelet[1898]: I0913 00:54:35.894322 1898 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:35.894635 kubelet[1898]: I0913 00:54:35.894330 1898 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fa91f20-0185-49fb-87b8-ac93ce126c61-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:54:36.416618 systemd[1]: Removed slice kubepods-burstable-pod8fa91f20_0185_49fb_87b8_ac93ce126c61.slice.
Sep 13 00:54:36.649415 systemd[1]: Created slice kubepods-burstable-pod814aabe1_ea0e_4e94_94bb_8cb49f2251c3.slice.
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799536 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-bpf-maps\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799585 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-hostproc\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799599 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-cilium-run\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799611 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-cilium-cgroup\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799622 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-etc-cni-netd\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.799657 kubelet[1898]: I0913 00:54:36.799635 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-hubble-tls\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799649 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-cni-path\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799691 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-lib-modules\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799720 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-cilium-config-path\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799741 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-xtables-lock\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799759 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlg6\" (UniqueName: \"kubernetes.io/projected/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-kube-api-access-7jlg6\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800084 kubelet[1898]: I0913 00:54:36.799775 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-clustermesh-secrets\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800245 kubelet[1898]: I0913 00:54:36.799790 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-cilium-ipsec-secrets\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800245 kubelet[1898]: I0913 00:54:36.799809 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-host-proc-sys-net\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.800245 kubelet[1898]: I0913 00:54:36.799834 1898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/814aabe1-ea0e-4e94-94bb-8cb49f2251c3-host-proc-sys-kernel\") pod \"cilium-njcdh\" (UID: \"814aabe1-ea0e-4e94-94bb-8cb49f2251c3\") " pod="kube-system/cilium-njcdh"
Sep 13 00:54:36.952381 kubelet[1898]: E0913 00:54:36.952305 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:36.952902 env[1203]: time="2025-09-13T00:54:36.952832688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njcdh,Uid:814aabe1-ea0e-4e94-94bb-8cb49f2251c3,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:36.964947 env[1203]: time="2025-09-13T00:54:36.964892936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:36.964947 env[1203]: time="2025-09-13T00:54:36.964925028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:36.964947 env[1203]: time="2025-09-13T00:54:36.964934125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:36.965132 env[1203]: time="2025-09-13T00:54:36.965054064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8 pid=3748 runtime=io.containerd.runc.v2
Sep 13 00:54:36.976293 systemd[1]: Started cri-containerd-91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8.scope.
Sep 13 00:54:36.993822 env[1203]: time="2025-09-13T00:54:36.993783157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njcdh,Uid:814aabe1-ea0e-4e94-94bb-8cb49f2251c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\""
Sep 13 00:54:36.994883 kubelet[1898]: E0913 00:54:36.994666 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:36.996693 env[1203]: time="2025-09-13T00:54:36.996669414Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:54:37.010528 env[1203]: time="2025-09-13T00:54:37.010473493Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e\""
Sep 13 00:54:37.010947 env[1203]: time="2025-09-13T00:54:37.010920147Z" level=info msg="StartContainer for \"cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e\""
Sep 13 00:54:37.027391 systemd[1]: Started cri-containerd-cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e.scope.
Sep 13 00:54:37.053544 env[1203]: time="2025-09-13T00:54:37.053419228Z" level=info msg="StartContainer for \"cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e\" returns successfully"
Sep 13 00:54:37.061619 systemd[1]: cri-containerd-cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e.scope: Deactivated successfully.
Sep 13 00:54:37.100629 env[1203]: time="2025-09-13T00:54:37.100564523Z" level=info msg="shim disconnected" id=cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e
Sep 13 00:54:37.100629 env[1203]: time="2025-09-13T00:54:37.100610531Z" level=warning msg="cleaning up after shim disconnected" id=cfaedc788630d25dbb1c76d271c8747870d114d9b9d19ad40e812a8d7b4bc52e namespace=k8s.io
Sep 13 00:54:37.100629 env[1203]: time="2025-09-13T00:54:37.100619388Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:37.106945 env[1203]: time="2025-09-13T00:54:37.106890562Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:37.619408 kubelet[1898]: E0913 00:54:37.619377 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:37.621131 env[1203]: time="2025-09-13T00:54:37.621097899Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:54:37.905193 systemd[1]: run-containerd-runc-k8s.io-91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8-runc.qqqi8K.mount: Deactivated successfully.
Sep 13 00:54:37.933672 env[1203]: time="2025-09-13T00:54:37.933615275Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2\""
Sep 13 00:54:37.934381 env[1203]: time="2025-09-13T00:54:37.934338800Z" level=info msg="StartContainer for \"5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2\""
Sep 13 00:54:37.960790 systemd[1]: Started cri-containerd-5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2.scope.
Sep 13 00:54:37.988410 systemd[1]: cri-containerd-5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2.scope: Deactivated successfully.
Sep 13 00:54:38.199428 env[1203]: time="2025-09-13T00:54:38.198810654Z" level=info msg="StartContainer for \"5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2\" returns successfully"
Sep 13 00:54:38.223653 env[1203]: time="2025-09-13T00:54:38.223576816Z" level=info msg="shim disconnected" id=5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2
Sep 13 00:54:38.223653 env[1203]: time="2025-09-13T00:54:38.223650668Z" level=warning msg="cleaning up after shim disconnected" id=5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2 namespace=k8s.io
Sep 13 00:54:38.223887 env[1203]: time="2025-09-13T00:54:38.223665947Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:38.231427 env[1203]: time="2025-09-13T00:54:38.231374215Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:38.413511 kubelet[1898]: I0913 00:54:38.413435 1898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa91f20-0185-49fb-87b8-ac93ce126c61" path="/var/lib/kubelet/pods/8fa91f20-0185-49fb-87b8-ac93ce126c61/volumes"
Sep 13 00:54:38.624243 kubelet[1898]: E0913 00:54:38.621963 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:38.626192 env[1203]: time="2025-09-13T00:54:38.626149940Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:54:38.640285 env[1203]: time="2025-09-13T00:54:38.640230730Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b\""
Sep 13 00:54:38.640731 env[1203]: time="2025-09-13T00:54:38.640697884Z" level=info msg="StartContainer for \"77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b\""
Sep 13 00:54:38.654022 systemd[1]: Started cri-containerd-77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b.scope.
Sep 13 00:54:38.678241 env[1203]: time="2025-09-13T00:54:38.678179604Z" level=info msg="StartContainer for \"77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b\" returns successfully"
Sep 13 00:54:38.680296 systemd[1]: cri-containerd-77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b.scope: Deactivated successfully.
Sep 13 00:54:38.699463 env[1203]: time="2025-09-13T00:54:38.699410483Z" level=info msg="shim disconnected" id=77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b
Sep 13 00:54:38.699640 env[1203]: time="2025-09-13T00:54:38.699466419Z" level=warning msg="cleaning up after shim disconnected" id=77f9545411f51565209d5ff66e634df5a4ccc85600f9a60f56a76db46b20cf6b namespace=k8s.io
Sep 13 00:54:38.699640 env[1203]: time="2025-09-13T00:54:38.699478111Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:38.705506 env[1203]: time="2025-09-13T00:54:38.705470797Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:38.905770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d73b87eef200442c8ed6516a8dfc82ed6903b93a97675d742ac800e1fa6ccc2-rootfs.mount: Deactivated successfully.
Sep 13 00:54:39.625257 kubelet[1898]: E0913 00:54:39.625228 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:39.626729 env[1203]: time="2025-09-13T00:54:39.626692295Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:54:39.643072 env[1203]: time="2025-09-13T00:54:39.640564962Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689\""
Sep 13 00:54:39.643072 env[1203]: time="2025-09-13T00:54:39.642211251Z" level=info msg="StartContainer for \"52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689\""
Sep 13 00:54:39.659887 systemd[1]: Started cri-containerd-52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689.scope.
Sep 13 00:54:39.679103 systemd[1]: cri-containerd-52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689.scope: Deactivated successfully.
Sep 13 00:54:39.680077 env[1203]: time="2025-09-13T00:54:39.680013649Z" level=info msg="StartContainer for \"52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689\" returns successfully"
Sep 13 00:54:39.698314 env[1203]: time="2025-09-13T00:54:39.698265110Z" level=info msg="shim disconnected" id=52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689
Sep 13 00:54:39.698314 env[1203]: time="2025-09-13T00:54:39.698304405Z" level=warning msg="cleaning up after shim disconnected" id=52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689 namespace=k8s.io
Sep 13 00:54:39.698314 env[1203]: time="2025-09-13T00:54:39.698312260Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:39.704348 env[1203]: time="2025-09-13T00:54:39.704295441Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4001 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:39.905285 systemd[1]: run-containerd-runc-k8s.io-52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689-runc.AyxlFO.mount: Deactivated successfully.
Sep 13 00:54:39.905364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b9805054fe9b7d4bbed821c1c29aa8284448bdcea89d7080e848bc80d60689-rootfs.mount: Deactivated successfully.
Sep 13 00:54:40.470159 kubelet[1898]: E0913 00:54:40.470111 1898 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:54:40.628747 kubelet[1898]: E0913 00:54:40.628715 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:40.630298 env[1203]: time="2025-09-13T00:54:40.630258757Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:54:40.648192 env[1203]: time="2025-09-13T00:54:40.648140235Z" level=info msg="CreateContainer within sandbox \"91e4072603c67632a1050d5d7106379397a21952edc693dbdf3eedcf1e337cc8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5\""
Sep 13 00:54:40.648646 env[1203]: time="2025-09-13T00:54:40.648614882Z" level=info msg="StartContainer for \"2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5\""
Sep 13 00:54:40.666026 systemd[1]: Started cri-containerd-2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5.scope.
Sep 13 00:54:40.693079 env[1203]: time="2025-09-13T00:54:40.693015216Z" level=info msg="StartContainer for \"2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5\" returns successfully"
Sep 13 00:54:40.948086 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:54:41.411827 kubelet[1898]: E0913 00:54:41.411777 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:41.633731 kubelet[1898]: E0913 00:54:41.633689 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:42.034831 kubelet[1898]: I0913 00:54:42.034757 1898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-njcdh" podStartSLOduration=6.034739797 podStartE2EDuration="6.034739797s" podCreationTimestamp="2025-09-13 00:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:42.034536469 +0000 UTC m=+91.747927919" watchObservedRunningTime="2025-09-13 00:54:42.034739797 +0000 UTC m=+91.748131247"
Sep 13 00:54:42.622016 kubelet[1898]: I0913 00:54:42.621957 1898 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:54:42Z","lastTransitionTime":"2025-09-13T00:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:54:42.953938 kubelet[1898]: E0913 00:54:42.953846 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:43.448156 systemd[1]: run-containerd-runc-k8s.io-2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5-runc.g8xydU.mount: Deactivated successfully.
Sep 13 00:54:43.461384 systemd-networkd[1023]: lxc_health: Link UP
Sep 13 00:54:43.475083 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:54:43.474764 systemd-networkd[1023]: lxc_health: Gained carrier
Sep 13 00:54:44.953785 kubelet[1898]: E0913 00:54:44.953736 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:45.186244 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Sep 13 00:54:45.640526 kubelet[1898]: E0913 00:54:45.640492 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:46.642303 kubelet[1898]: E0913 00:54:46.642260 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:47.411330 kubelet[1898]: E0913 00:54:47.411284 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:47.411330 kubelet[1898]: E0913 00:54:47.411288 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:47.728553 systemd[1]: run-containerd-runc-k8s.io-2d54c0bd4133fc23dc6ba24c66cfcf86da83f29eaca359d712743d9c030cbef5-runc.qYuVAB.mount: Deactivated successfully.
Sep 13 00:54:49.410988 kubelet[1898]: E0913 00:54:49.410949 1898 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:49.851334 sshd[3718]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:49.853612 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:53170.service: Deactivated successfully.
Sep 13 00:54:49.854519 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:54:49.855101 systemd-logind[1194]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:54:49.855758 systemd-logind[1194]: Removed session 26.