Aug 13 00:53:02.871922 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:53:02.871950 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:02.871963 kernel: BIOS-provided physical RAM map:
Aug 13 00:53:02.871971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:53:02.871978 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:53:02.871985 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:53:02.871995 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:53:02.872002 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:53:02.872010 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 00:53:02.872019 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 00:53:02.872027 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 00:53:02.872035 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Aug 13 00:53:02.872043 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 00:53:02.872050 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:53:02.872060 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 00:53:02.872070 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 00:53:02.872079 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:53:02.872087 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:53:02.872112 kernel: NX (Execute Disable) protection: active
Aug 13 00:53:02.872121 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Aug 13 00:53:02.872129 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Aug 13 00:53:02.872137 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Aug 13 00:53:02.872146 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Aug 13 00:53:02.872153 kernel: extended physical RAM map:
Aug 13 00:53:02.872161 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:53:02.872172 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:53:02.872181 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:53:02.872189 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:53:02.872197 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:53:02.872205 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 00:53:02.872213 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 00:53:02.872221 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Aug 13 00:53:02.872230 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Aug 13 00:53:02.872238 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Aug 13 00:53:02.872246 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Aug 13 00:53:02.872254 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Aug 13 00:53:02.872266 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Aug 13 00:53:02.872274 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 00:53:02.872282 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:53:02.872291 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 00:53:02.872303 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 00:53:02.872312 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:53:02.872321 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:53:02.872333 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:53:02.872342 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Aug 13 00:53:02.872351 kernel: random: crng init done
Aug 13 00:53:02.872360 kernel: SMBIOS 2.8 present.
Aug 13 00:53:02.872369 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 00:53:02.872377 kernel: Hypervisor detected: KVM
Aug 13 00:53:02.872386 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:53:02.872395 kernel: kvm-clock: cpu 0, msr e19e001, primary cpu clock
Aug 13 00:53:02.872403 kernel: kvm-clock: using sched offset of 5308708056 cycles
Aug 13 00:53:02.872420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:53:02.872441 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:53:02.872451 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:53:02.872460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:53:02.872469 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 00:53:02.872478 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:53:02.872487 kernel: Using GB pages for direct mapping
Aug 13 00:53:02.872496 kernel: Secure boot disabled
Aug 13 00:53:02.872505 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:53:02.872517 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 00:53:02.872526 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:53:02.872535 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872545 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872557 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 00:53:02.872566 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872575 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872587 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872597 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:02.872608 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 00:53:02.872627 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 00:53:02.872636 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 00:53:02.872645 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 00:53:02.872654 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 00:53:02.872663 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 00:53:02.872672 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 00:53:02.872681 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 00:53:02.872690 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 00:53:02.872701 kernel: No NUMA configuration found
Aug 13 00:53:02.872710 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 00:53:02.872720 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 00:53:02.872729 kernel: Zone ranges:
Aug 13 00:53:02.872738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:53:02.872747 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 00:53:02.872756 kernel: Normal empty
Aug 13 00:53:02.872765 kernel: Movable zone start for each node
Aug 13 00:53:02.872774 kernel: Early memory node ranges
Aug 13 00:53:02.872785 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:53:02.872794 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 00:53:02.872803 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 00:53:02.872812 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 00:53:02.872821 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 00:53:02.872830 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 00:53:02.872839 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 00:53:02.872848 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:53:02.872857 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:53:02.872866 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 00:53:02.872877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:53:02.872886 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 00:53:02.872895 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 00:53:02.872904 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 00:53:02.872913 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:53:02.872922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:53:02.872931 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:53:02.872940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:53:02.872950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:53:02.872961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:53:02.872970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:53:02.872979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:53:02.872992 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:53:02.873004 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:53:02.873013 kernel: TSC deadline timer available
Aug 13 00:53:02.873022 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 00:53:02.873031 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:53:02.873040 kernel: kvm-guest: setup PV sched yield
Aug 13 00:53:02.873051 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 00:53:02.873061 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:53:02.873076 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:53:02.873088 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:53:02.873146 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Aug 13 00:53:02.873156 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Aug 13 00:53:02.873184 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:53:02.873213 kernel: kvm-guest: setup async PF for cpu 0
Aug 13 00:53:02.873223 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Aug 13 00:53:02.873232 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:53:02.873241 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:53:02.873250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 00:53:02.873263 kernel: Policy zone: DMA32
Aug 13 00:53:02.873274 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:02.873284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:53:02.873293 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:53:02.873305 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:53:02.873314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:53:02.873324 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 169308K reserved, 0K cma-reserved)
Aug 13 00:53:02.873333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:53:02.873343 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:53:02.873352 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:53:02.873362 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:53:02.873372 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:53:02.873387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:53:02.873397 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:53:02.873407 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:53:02.873419 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:53:02.873431 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:53:02.873442 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:53:02.873452 kernel: Console: colour dummy device 80x25
Aug 13 00:53:02.873462 kernel: printk: console [ttyS0] enabled
Aug 13 00:53:02.873471 kernel: ACPI: Core revision 20210730
Aug 13 00:53:02.873480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:53:02.873493 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:53:02.873502 kernel: x2apic enabled
Aug 13 00:53:02.873512 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:53:02.873522 kernel: kvm-guest: setup PV IPIs
Aug 13 00:53:02.873531 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:53:02.873540 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:53:02.873550 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:53:02.873559 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:53:02.873575 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:53:02.878402 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:53:02.878414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:53:02.878425 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:53:02.878436 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:53:02.878446 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:53:02.878456 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:53:02.878466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:53:02.878502 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 00:53:02.878519 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:53:02.878529 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:53:02.878539 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:53:02.878550 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:53:02.878560 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:53:02.878570 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:53:02.878588 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:53:02.878604 kernel: LSM: Security Framework initializing
Aug 13 00:53:02.878625 kernel: SELinux: Initializing.
Aug 13 00:53:02.878639 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:53:02.878650 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:53:02.878661 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:53:02.878671 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:53:02.878681 kernel: ... version: 0
Aug 13 00:53:02.878691 kernel: ... bit width: 48
Aug 13 00:53:02.878715 kernel: ... generic registers: 6
Aug 13 00:53:02.878726 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:53:02.878736 kernel: ... max period: 00007fffffffffff
Aug 13 00:53:02.878749 kernel: ... fixed-purpose events: 0
Aug 13 00:53:02.878759 kernel: ... event mask: 000000000000003f
Aug 13 00:53:02.878769 kernel: signal: max sigframe size: 1776
Aug 13 00:53:02.878787 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:53:02.878803 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:53:02.878813 kernel: x86: Booting SMP configuration:
Aug 13 00:53:02.878823 kernel: .... node #0, CPUs: #1
Aug 13 00:53:02.878834 kernel: kvm-clock: cpu 1, msr e19e041, secondary cpu clock
Aug 13 00:53:02.878844 kernel: kvm-guest: setup async PF for cpu 1
Aug 13 00:53:02.878854 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Aug 13 00:53:02.878881 kernel: #2
Aug 13 00:53:02.878892 kernel: kvm-clock: cpu 2, msr e19e081, secondary cpu clock
Aug 13 00:53:02.878902 kernel: kvm-guest: setup async PF for cpu 2
Aug 13 00:53:02.878912 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Aug 13 00:53:02.878922 kernel: #3
Aug 13 00:53:02.878946 kernel: kvm-clock: cpu 3, msr e19e0c1, secondary cpu clock
Aug 13 00:53:02.878956 kernel: kvm-guest: setup async PF for cpu 3
Aug 13 00:53:02.878966 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Aug 13 00:53:02.878989 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:53:02.879007 kernel: smpboot: Max logical packages: 1
Aug 13 00:53:02.879017 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:53:02.879027 kernel: devtmpfs: initialized
Aug 13 00:53:02.879038 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:53:02.879061 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 00:53:02.879072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 00:53:02.879082 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 00:53:02.879103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 00:53:02.879128 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 00:53:02.879142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:53:02.879152 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:53:02.879172 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:53:02.879186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:53:02.879197 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:53:02.879208 kernel: audit: type=2000 audit(1755046382.061:1): state=initialized audit_enabled=0 res=1
Aug 13 00:53:02.879218 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:53:02.879237 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:53:02.879249 kernel: cpuidle: using governor menu
Aug 13 00:53:02.879262 kernel: ACPI: bus type PCI registered
Aug 13 00:53:02.879272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:53:02.879282 kernel: dca service started, version 1.12.1
Aug 13 00:53:02.879292 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:53:02.879303 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Aug 13 00:53:02.879313 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:53:02.879323 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:53:02.879334 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:53:02.879344 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:53:02.879358 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:53:02.879368 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:53:02.879378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:53:02.879388 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:53:02.879398 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:53:02.879408 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:53:02.879418 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:53:02.879428 kernel: ACPI: Interpreter enabled
Aug 13 00:53:02.879438 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:53:02.879449 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:53:02.879458 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:53:02.879468 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:53:02.879478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:53:02.879690 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:53:02.879789 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:53:02.879882 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:53:02.879899 kernel: PCI host bridge to bus 0000:00
Aug 13 00:53:02.880010 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:53:02.880110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:53:02.880208 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:53:02.880333 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:53:02.880432 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:53:02.880528 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 00:53:02.880638 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:53:02.880779 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:53:02.880913 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:53:02.881022 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 00:53:02.881147 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 00:53:02.881271 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 00:53:02.881415 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 00:53:02.881578 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:53:02.881723 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:53:02.881837 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 00:53:02.881943 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 00:53:02.882047 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 00:53:02.882187 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:53:02.882298 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 00:53:02.882405 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 00:53:02.882516 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 00:53:02.882656 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:53:02.882765 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 00:53:02.882870 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 00:53:02.882975 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 00:53:02.883084 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 00:53:02.883233 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:53:02.883341 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:53:02.883467 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:53:02.883572 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 00:53:02.883687 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 00:53:02.883812 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:53:02.883922 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 00:53:02.883936 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:53:02.883946 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:53:02.883956 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:53:02.883965 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:53:02.883975 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:53:02.883984 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:53:02.883994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:53:02.884006 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:53:02.884016 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:53:02.884025 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:53:02.884035 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:53:02.884044 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:53:02.884054 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:53:02.884063 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:53:02.884073 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:53:02.884082 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:53:02.884107 kernel: iommu: Default domain type: Translated
Aug 13 00:53:02.884117 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:53:02.884223 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:53:02.884325 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:53:02.884429 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:53:02.884446 kernel: vgaarb: loaded
Aug 13 00:53:02.884458 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:53:02.884468 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:53:02.884478 kernel: PTP clock support registered
Aug 13 00:53:02.884490 kernel: Registered efivars operations
Aug 13 00:53:02.884500 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:53:02.884510 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:53:02.884520 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 00:53:02.884529 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 00:53:02.884538 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Aug 13 00:53:02.884548 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Aug 13 00:53:02.884557 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 00:53:02.884567 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 00:53:02.884578 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:53:02.884588 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:53:02.884598 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:53:02.884607 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:53:02.884628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:53:02.884638 kernel: pnp: PnP ACPI init
Aug 13 00:53:02.884767 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:53:02.884784 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:53:02.884797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:53:02.884807 kernel: NET: Registered PF_INET protocol family
Aug 13 00:53:02.884817 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:53:02.884826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:53:02.884836 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:53:02.884846 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:53:02.884855 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:53:02.884865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:53:02.884875 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:53:02.884886 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:53:02.884897 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:53:02.884906 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:53:02.885046 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 00:53:02.885169 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 00:53:02.885273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:53:02.885350 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:53:02.885431 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:53:02.885512 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:53:02.885587 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:53:02.885675 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 00:53:02.885687 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:53:02.885696 kernel: Initialise system trusted keyrings
Aug 13 00:53:02.885705 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:53:02.885713 kernel: Key type asymmetric registered
Aug 13 00:53:02.885721 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:53:02.885732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:53:02.885741 kernel: io scheduler mq-deadline registered
Aug 13 00:53:02.885750 kernel: io scheduler kyber registered
Aug 13 00:53:02.885768 kernel: io scheduler bfq registered
Aug 13 00:53:02.885779 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:53:02.885789 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:53:02.885798 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:53:02.885807 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:53:02.885816 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:53:02.885826 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:53:02.885835 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:53:02.885844 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:53:02.885853 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:53:02.885957 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:53:02.885970 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:53:02.886165 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:53:02.886267 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:53:02 UTC (1755046382)
Aug 13 00:53:02.886373 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:53:02.886388 kernel: efifb: probing for efifb
Aug 13 00:53:02.886398 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Aug 13 00:53:02.886408 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Aug 13 00:53:02.886418 kernel: efifb: scrolling: redraw
Aug 13 00:53:02.886428 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:53:02.886438 kernel: Console: switching to colour frame buffer device 160x50
Aug 13 00:53:02.886448 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:53:02.886458 kernel: pstore: Registered efi as persistent store backend
Aug 13 00:53:02.886471 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:53:02.886481 kernel: Segment Routing with IPv6
Aug 13 00:53:02.886491 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:53:02.886502 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:53:02.886514 kernel: Key type dns_resolver registered
Aug 13 00:53:02.886523 kernel: IPI shorthand broadcast: enabled
Aug 13 00:53:02.886535 kernel: sched_clock: Marking stable (567540800, 125182921)->(787626969, -94903248)
Aug 13 00:53:02.886545 kernel: registered taskstats version 1
Aug 13 00:53:02.886555 kernel: Loading compiled-in X.509 certificates
Aug 13 00:53:02.886565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:53:02.886575 kernel: Key type .fscrypt registered
Aug 13 00:53:02.886584 kernel: Key type fscrypt-provisioning registered
Aug 13 00:53:02.886595 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:53:02.886605 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:53:02.886627 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:53:02.886637 kernel: ima: No architecture policies found
Aug 13 00:53:02.886647 kernel: clk: Disabling unused clocks
Aug 13 00:53:02.886657 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:53:02.886668 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:53:02.886678 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:53:02.886688 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:53:02.886698 kernel: Run /init as init process
Aug 13 00:53:02.886708 kernel: with arguments:
Aug 13 00:53:02.886718 kernel: /init
Aug 13 00:53:02.886730 kernel: with environment:
Aug 13 00:53:02.886739 kernel: HOME=/
Aug 13 00:53:02.886749 kernel: TERM=linux
Aug 13 00:53:02.886759 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:53:02.886772 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:53:02.886785 systemd[1]: Detected virtualization kvm.
Aug 13 00:53:02.886796 systemd[1]: Detected architecture x86-64.
Aug 13 00:53:02.886808 systemd[1]: Running in initrd.
Aug 13 00:53:02.886818 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:53:02.886828 systemd[1]: Hostname set to .
Aug 13 00:53:02.886839 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:53:02.886850 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:53:02.886861 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:53:02.886871 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:53:02.886881 systemd[1]: Reached target paths.target.
Aug 13 00:53:02.886892 systemd[1]: Reached target slices.target.
Aug 13 00:53:02.886904 systemd[1]: Reached target swap.target.
Aug 13 00:53:02.886915 systemd[1]: Reached target timers.target.
Aug 13 00:53:02.886928 systemd[1]: Listening on iscsid.socket.
Aug 13 00:53:02.886939 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:53:02.886949 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:53:02.886960 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:53:02.886971 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:53:02.886990 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:53:02.887001 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:53:02.887012 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:53:02.887022 systemd[1]: Reached target sockets.target.
Aug 13 00:53:02.887033 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:53:02.887044 systemd[1]: Finished network-cleanup.service.
Aug 13 00:53:02.887054 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:53:02.887065 systemd[1]: Starting systemd-journald.service...
Aug 13 00:53:02.887076 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:53:02.887105 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:53:02.887116 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:53:02.887127 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:53:02.887137 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:53:02.887148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:53:02.887158 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:53:02.887169 kernel: audit: type=1130 audit(1755046382.881:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.887199 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:53:02.887214 systemd-journald[197]: Journal started
Aug 13 00:53:02.887293 systemd-journald[197]: Runtime Journal (/run/log/journal/d95a0615226541939bc5bb7adffacff4) is 6.0M, max 48.4M, 42.4M free.
Aug 13 00:53:02.887341 kernel: audit: type=1130 audit(1755046382.887:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.873768 systemd-modules-load[198]: Inserted module 'overlay'
Aug 13 00:53:02.892837 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:53:02.892860 systemd[1]: Started systemd-journald.service.
Aug 13 00:53:02.893153 kernel: audit: type=1130 audit(1755046382.891:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.902151 systemd-resolved[199]: Positive Trust Anchors:
Aug 13 00:53:02.902180 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:53:02.902222 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:53:02.904458 systemd-resolved[199]: Defaulting to hostname 'linux'.
Aug 13 00:53:02.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.905324 systemd[1]: Started systemd-resolved.service.
Aug 13 00:53:02.906544 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:53:02.910841 kernel: audit: type=1130 audit(1755046382.902:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.910208 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:53:02.917983 kernel: audit: type=1130 audit(1755046382.911:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.915066 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:53:02.923693 dracut-cmdline[215]: dracut-dracut-053
Aug 13 00:53:02.925739 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:02.948122 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:53:02.953666 systemd-modules-load[198]: Inserted module 'br_netfilter'
Aug 13 00:53:02.954743 kernel: Bridge firewalling registered
Aug 13 00:53:02.974125 kernel: SCSI subsystem initialized
Aug 13 00:53:02.985263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:53:02.985312 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:53:02.985323 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:53:02.987123 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:53:02.989359 systemd-modules-load[198]: Inserted module 'dm_multipath'
Aug 13 00:53:02.990904 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:53:02.996357 kernel: audit: type=1130 audit(1755046382.991:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:02.992673 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:53:03.002342 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:53:03.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.007117 kernel: audit: type=1130 audit(1755046383.003:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.007142 kernel: iscsi: registered transport (tcp)
Aug 13 00:53:03.029259 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:53:03.029318 kernel: QLogic iSCSI HBA Driver
Aug 13 00:53:03.058888 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:53:03.063838 kernel: audit: type=1130 audit(1755046383.059:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.060857 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:53:03.106131 kernel: raid6: avx2x4 gen() 30678 MB/s
Aug 13 00:53:03.123118 kernel: raid6: avx2x4 xor() 8206 MB/s
Aug 13 00:53:03.140124 kernel: raid6: avx2x2 gen() 32025 MB/s
Aug 13 00:53:03.157129 kernel: raid6: avx2x2 xor() 18870 MB/s
Aug 13 00:53:03.174118 kernel: raid6: avx2x1 gen() 26393 MB/s
Aug 13 00:53:03.191130 kernel: raid6: avx2x1 xor() 15143 MB/s
Aug 13 00:53:03.208127 kernel: raid6: sse2x4 gen() 14586 MB/s
Aug 13 00:53:03.225131 kernel: raid6: sse2x4 xor() 7605 MB/s
Aug 13 00:53:03.242124 kernel: raid6: sse2x2 gen() 16034 MB/s
Aug 13 00:53:03.259130 kernel: raid6: sse2x2 xor() 9575 MB/s
Aug 13 00:53:03.304142 kernel: raid6: sse2x1 gen() 12193 MB/s
Aug 13 00:53:03.321478 kernel: raid6: sse2x1 xor() 7703 MB/s
Aug 13 00:53:03.321536 kernel: raid6: using algorithm avx2x2 gen() 32025 MB/s
Aug 13 00:53:03.321550 kernel: raid6: .... xor() 18870 MB/s, rmw enabled
Aug 13 00:53:03.322147 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:53:03.335131 kernel: xor: automatically using best checksumming function avx
Aug 13 00:53:03.430131 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 00:53:03.437621 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:53:03.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.441000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:53:03.441000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:53:03.442169 kernel: audit: type=1130 audit(1755046383.438:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.442429 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:53:03.455168 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Aug 13 00:53:03.459214 systemd[1]: Started systemd-udevd.service.
Aug 13 00:53:03.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.461719 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:53:03.476289 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Aug 13 00:53:03.506033 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:53:03.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.508737 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:53:03.548388 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:53:03.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:03.595132 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:53:03.601123 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:53:03.601157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:53:03.601168 kernel: GPT:9289727 != 19775487
Aug 13 00:53:03.601177 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:53:03.601185 kernel: GPT:9289727 != 19775487
Aug 13 00:53:03.601193 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:53:03.601202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:03.606113 kernel: libata version 3.00 loaded.
Aug 13 00:53:03.612209 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:53:03.612264 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:53:03.616375 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:53:03.633757 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:53:03.633779 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 00:53:03.633923 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:53:03.634042 kernel: scsi host0: ahci
Aug 13 00:53:03.634214 kernel: scsi host1: ahci
Aug 13 00:53:03.634394 kernel: scsi host2: ahci
Aug 13 00:53:03.634553 kernel: scsi host3: ahci
Aug 13 00:53:03.634689 kernel: scsi host4: ahci
Aug 13 00:53:03.634798 kernel: scsi host5: ahci
Aug 13 00:53:03.634926 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 00:53:03.634941 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 00:53:03.634953 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 00:53:03.634981 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 00:53:03.634995 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 00:53:03.635008 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 00:53:03.635020 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Aug 13 00:53:03.629812 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 00:53:03.638201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 00:53:03.650338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 00:53:03.655645 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:53:03.660414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 00:53:03.666182 systemd[1]: Starting disk-uuid.service...
Aug 13 00:53:03.856041 disk-uuid[529]: Primary Header is updated.
Aug 13 00:53:03.856041 disk-uuid[529]: Secondary Entries is updated.
Aug 13 00:53:03.856041 disk-uuid[529]: Secondary Header is updated.
Aug 13 00:53:03.861139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:03.865130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:03.941148 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:03.941221 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:03.949120 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:03.949172 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:03.950131 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 00:53:03.951134 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:03.952123 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 00:53:03.953112 kernel: ata3.00: applying bridge limits
Aug 13 00:53:03.953155 kernel: ata3.00: configured for UDMA/100
Aug 13 00:53:03.954116 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:53:03.989135 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 00:53:04.005779 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:53:04.005796 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:53:04.954687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:04.957502 disk-uuid[530]: The operation has completed successfully.
Aug 13 00:53:05.027630 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:53:05.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.027717 systemd[1]: Finished disk-uuid.service.
Aug 13 00:53:05.030855 systemd[1]: Starting verity-setup.service...
Aug 13 00:53:05.061298 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 00:53:05.154918 systemd[1]: Found device dev-mapper-usr.device.
Aug 13 00:53:05.158839 systemd[1]: Mounting sysusr-usr.mount...
Aug 13 00:53:05.166822 systemd[1]: Finished verity-setup.service.
Aug 13 00:53:05.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.440175 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Aug 13 00:53:05.439841 systemd[1]: Mounted sysusr-usr.mount.
Aug 13 00:53:05.441971 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Aug 13 00:53:05.459759 systemd[1]: Starting ignition-setup.service...
Aug 13 00:53:05.462737 systemd[1]: Starting parse-ip-for-networkd.service...
Aug 13 00:53:05.483775 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:05.483844 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:05.483859 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:05.516504 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:53:05.538556 systemd[1]: Finished ignition-setup.service.
Aug 13 00:53:05.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.540001 systemd[1]: Starting ignition-fetch-offline.service...
Aug 13 00:53:05.634656 systemd[1]: Finished parse-ip-for-networkd.service.
Aug 13 00:53:05.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.649000 audit: BPF prog-id=9 op=LOAD
Aug 13 00:53:05.650496 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:53:05.713421 systemd-networkd[720]: lo: Link UP
Aug 13 00:53:05.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:05.713431 systemd-networkd[720]: lo: Gained carrier
Aug 13 00:53:05.714403 systemd-networkd[720]: Enumeration completed
Aug 13 00:53:05.714926 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:53:05.715317 systemd[1]: Started systemd-networkd.service.
Aug 13 00:53:05.718466 systemd[1]: Reached target network.target.
Aug 13 00:53:05.723346 systemd-networkd[720]: eth0: Link UP
Aug 13 00:53:05.723353 systemd-networkd[720]: eth0: Gained carrier
Aug 13 00:53:05.727891 systemd[1]: Starting iscsiuio.service...
Aug 13 00:53:05.991506 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:53:05.995819 systemd[1]: Started iscsiuio.service.
Aug 13 00:53:06.000675 ignition[659]: Ignition 2.14.0
Aug 13 00:53:05.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.000699 ignition[659]: Stage: fetch-offline
Aug 13 00:53:06.000805 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:06.000820 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:06.000994 ignition[659]: parsed url from cmdline: ""
Aug 13 00:53:06.000998 ignition[659]: no config URL provided
Aug 13 00:53:06.001004 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:53:06.011681 systemd[1]: Starting iscsid.service...
Aug 13 00:53:06.001012 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:53:06.022854 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:06.022854 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Aug 13 00:53:06.022854 iscsid[726]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Aug 13 00:53:06.022854 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 00:53:06.022854 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 13 00:53:06.022854 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:06.022854 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 13 00:53:06.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.001046 ignition[659]: op(1): [started] loading QEMU firmware config module
Aug 13 00:53:06.028388 systemd[1]: Started iscsid.service.
Aug 13 00:53:06.001052 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:53:06.033285 systemd[1]: Starting dracut-initqueue.service...
Aug 13 00:53:06.012624 ignition[659]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:53:06.057231 systemd[1]: Finished dracut-initqueue.service.
Aug 13 00:53:06.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.058471 systemd[1]: Reached target remote-fs-pre.target.
Aug 13 00:53:06.060234 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:53:06.062266 systemd[1]: Reached target remote-fs.target.
Aug 13 00:53:06.063471 systemd[1]: Starting dracut-pre-mount.service...
Aug 13 00:53:06.081652 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 00:53:06.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.098260 ignition[659]: parsing config with SHA512: 0c2a0082e68a240c6bd97df8868129978c2a5416d8cb3c634e4580eede2f7963fd2cb2e5207224bd2bb628aa08a1a554e3e8535423c7bfb54706e19173004b44
Aug 13 00:53:06.196435 unknown[659]: fetched base config from "system"
Aug 13 00:53:06.196452 unknown[659]: fetched user config from "qemu"
Aug 13 00:53:06.197329 ignition[659]: fetch-offline: fetch-offline passed
Aug 13 00:53:06.197628 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.79
Aug 13 00:53:06.197431 ignition[659]: Ignition finished successfully
Aug 13 00:53:06.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.197645 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Aug 13 00:53:06.200427 systemd[1]: Finished ignition-fetch-offline.service.
Aug 13 00:53:06.204741 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:53:06.206160 systemd[1]: Starting ignition-kargs.service...
Aug 13 00:53:06.242804 ignition[741]: Ignition 2.14.0
Aug 13 00:53:06.242826 ignition[741]: Stage: kargs
Aug 13 00:53:06.243001 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:06.243015 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:06.244611 ignition[741]: kargs: kargs passed
Aug 13 00:53:06.244674 ignition[741]: Ignition finished successfully
Aug 13 00:53:06.253327 systemd[1]: Finished ignition-kargs.service.
Aug 13 00:53:06.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.257886 systemd[1]: Starting ignition-disks.service...
Aug 13 00:53:06.318875 ignition[747]: Ignition 2.14.0
Aug 13 00:53:06.318896 ignition[747]: Stage: disks
Aug 13 00:53:06.319108 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:06.319124 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:06.320576 ignition[747]: disks: disks passed
Aug 13 00:53:06.320632 ignition[747]: Ignition finished successfully
Aug 13 00:53:06.330341 systemd[1]: Finished ignition-disks.service.
Aug 13 00:53:06.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.331548 systemd[1]: Reached target initrd-root-device.target.
Aug 13 00:53:06.333406 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:53:06.335207 systemd[1]: Reached target local-fs.target.
Aug 13 00:53:06.336882 systemd[1]: Reached target sysinit.target.
Aug 13 00:53:06.340689 systemd[1]: Reached target basic.target.
Aug 13 00:53:06.343458 systemd[1]: Starting systemd-fsck-root.service...
Aug 13 00:53:06.373675 systemd-fsck[755]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Aug 13 00:53:06.620729 systemd[1]: Finished systemd-fsck-root.service.
Aug 13 00:53:06.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.642849 systemd[1]: Mounting sysroot.mount...
Aug 13 00:53:06.675761 systemd[1]: Mounted sysroot.mount.
Aug 13 00:53:06.680821 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 00:53:06.677810 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 00:53:06.684837 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 00:53:06.688812 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 00:53:06.688879 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:53:06.688918 systemd[1]: Reached target ignition-diskful.target.
Aug 13 00:53:06.694727 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 00:53:06.708901 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 00:53:06.716801 systemd[1]: Starting initrd-setup-root.service...
Aug 13 00:53:06.726524 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (761)
Aug 13 00:53:06.726553 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:53:06.733708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:06.733766 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:06.733792 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:06.738167 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:53:06.750672 initrd-setup-root[797]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:53:06.764771 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:53:06.777257 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:53:06.870189 systemd[1]: Finished initrd-setup-root.service.
Aug 13 00:53:06.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:06.872667 systemd[1]: Starting ignition-mount.service...
Aug 13 00:53:06.875265 systemd[1]: Starting sysroot-boot.service...
Aug 13 00:53:06.893630 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Aug 13 00:53:06.893760 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Aug 13 00:53:06.930438 systemd[1]: Finished sysroot-boot.service.
Aug 13 00:53:06.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:07.035334 systemd-networkd[720]: eth0: Gained IPv6LL
Aug 13 00:53:07.049309 ignition[829]: INFO : Ignition 2.14.0
Aug 13 00:53:07.049309 ignition[829]: INFO : Stage: mount
Aug 13 00:53:07.054664 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:07.054664 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:07.054664 ignition[829]: INFO : mount: mount passed
Aug 13 00:53:07.054664 ignition[829]: INFO : Ignition finished successfully
Aug 13 00:53:07.066767 systemd[1]: Finished ignition-mount.service.
Aug 13 00:53:07.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:07.073267 systemd[1]: Starting ignition-files.service...
Aug 13 00:53:07.093742 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 00:53:07.113084 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836)
Aug 13 00:53:07.166812 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:07.166916 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:07.166933 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:07.174692 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:53:07.197866 ignition[855]: INFO : Ignition 2.14.0
Aug 13 00:53:07.199229 ignition[855]: INFO : Stage: files
Aug 13 00:53:07.199229 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:07.199229 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:07.204748 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:53:07.210789 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:53:07.210789 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:53:07.216255 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:53:07.216255 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:53:07.216255 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:53:07.216175 unknown[855]: wrote ssh authorized keys file for user: core
Aug 13 00:53:07.226301 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:53:07.226301 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:07.297627 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:53:07.709153 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:53:07.711882 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:07.711882 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:07.834945 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:53:08.300211 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:08.302979 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:08.302979 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:08.302979 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:53:08.312790 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 00:53:08.734512 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:53:10.884321 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:53:10.884321 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:10.896580 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:11.167483 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:11.167483 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:11.173710 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:11.181244 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:11.181244 ignition[855]: INFO : files: files passed
Aug 13 00:53:11.187974 ignition[855]: INFO : Ignition finished successfully
Aug 13 00:53:11.207444 kernel: kauditd_printk_skb: 23 callbacks suppressed
Aug 13 00:53:11.207480 kernel: audit: type=1130 audit(1755046391.189:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.183301 systemd[1]: Finished ignition-files.service.
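The Ignition `files` stage above logs every operation twice, as a numbered `op(N): [started] …` / `op(N): [finished] …` pair (with nested sub-ops like `op(c): op(d):`). A minimal sketch of pairing those markers up from plain journal text, e.g. to spot an op that started but never finished; the regex and the `pair_ops` helper are illustrative, not part of Ignition itself:

```python
import re

# Matches the innermost "op(<hex-id>): [started|finished] <description>" marker,
# e.g.: ignition[855]: INFO : files: ... op(3): [started] writing file "/sysroot/..."
OP_RE = re.compile(r'op\(([0-9a-f]+)\): \[(started|finished)\] (.+)$')

def pair_ops(lines):
    """Return {op-id: description} for ops that both started and finished."""
    started, finished = {}, set()
    for line in lines:
        m = OP_RE.search(line)  # search() skips outer "op(c):" prefixes of nested ops
        if not m:
            continue
        op_id, phase, desc = m.groups()
        if phase == "started":
            started[op_id] = desc
        else:
            finished.add(op_id)
    return {op: desc for op, desc in started.items() if op in finished}
```

On the stage shown above, every op from `op(1)` through `op(13)` appears in both phases, so the returned dict would cover them all.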
Aug 13 00:53:11.191144 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 00:53:11.203943 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 00:53:11.219654 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Aug 13 00:53:11.235852 kernel: audit: type=1130 audit(1755046391.223:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.207604 systemd[1]: Starting ignition-quench.service...
Aug 13 00:53:11.248720 kernel: audit: type=1130 audit(1755046391.236:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.248760 kernel: audit: type=1131 audit(1755046391.236:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.248896 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:53:11.216433 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 00:53:11.225000 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:53:11.225138 systemd[1]: Finished ignition-quench.service.
Aug 13 00:53:11.239450 systemd[1]: Reached target ignition-complete.target.
Aug 13 00:53:11.249919 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 00:53:11.271351 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:53:11.271498 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 00:53:11.287112 kernel: audit: type=1130 audit(1755046391.273:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.287149 kernel: audit: type=1131 audit(1755046391.273:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.274152 systemd[1]: Reached target initrd-fs.target.
Aug 13 00:53:11.282645 systemd[1]: Reached target initrd.target.
Aug 13 00:53:11.283084 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 00:53:11.284269 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 00:53:11.304810 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 00:53:11.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.307164 systemd[1]: Starting initrd-cleanup.service...
Aug 13 00:53:11.312740 kernel: audit: type=1130 audit(1755046391.305:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.324461 systemd[1]: Stopped target nss-lookup.target.
Aug 13 00:53:11.329687 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 00:53:11.333286 systemd[1]: Stopped target timers.target.
Aug 13 00:53:11.335412 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:53:11.335590 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 00:53:11.338989 systemd[1]: Stopped target initrd.target.
Aug 13 00:53:11.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.342346 systemd[1]: Stopped target basic.target.
Aug 13 00:53:11.347485 kernel: audit: type=1131 audit(1755046391.338:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.347556 systemd[1]: Stopped target ignition-complete.target.
Aug 13 00:53:11.349859 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 00:53:11.352113 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 00:53:11.354352 systemd[1]: Stopped target remote-fs.target.
Aug 13 00:53:11.363437 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 00:53:11.364658 systemd[1]: Stopped target sysinit.target.
Aug 13 00:53:11.366784 systemd[1]: Stopped target local-fs.target.
Aug 13 00:53:11.369291 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 00:53:11.371258 systemd[1]: Stopped target swap.target.
Aug 13 00:53:11.372312 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:53:11.373276 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 00:53:11.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.376560 systemd[1]: Stopped target cryptsetup.target.
Aug 13 00:53:11.383998 kernel: audit: type=1131 audit(1755046391.375:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.384085 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:53:11.385594 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 00:53:11.388506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:53:11.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.391374 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 00:53:11.397279 kernel: audit: type=1131 audit(1755046391.387:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.397999 systemd[1]: Stopped target paths.target.
Aug 13 00:53:11.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.399837 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:53:11.405247 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 00:53:11.409153 systemd[1]: Stopped target slices.target.
Aug 13 00:53:11.413315 systemd[1]: Stopped target sockets.target.
Aug 13 00:53:11.415564 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:53:11.416936 systemd[1]: Closed iscsid.socket.
Aug 13 00:53:11.419550 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:53:11.419665 systemd[1]: Closed iscsiuio.socket.
Aug 13 00:53:11.420841 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:53:11.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.420999 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 00:53:11.427228 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:53:11.429557 systemd[1]: Stopped ignition-files.service.
Aug 13 00:53:11.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.436260 systemd[1]: Stopping ignition-mount.service...
Aug 13 00:53:11.444752 systemd[1]: Stopping sysroot-boot.service...
Aug 13 00:53:11.449034 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:53:11.454157 ignition[896]: INFO : Ignition 2.14.0
Aug 13 00:53:11.454157 ignition[896]: INFO : Stage: umount
Aug 13 00:53:11.454157 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:11.454157 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:11.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.454052 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 00:53:11.470127 ignition[896]: INFO : umount: umount passed
Aug 13 00:53:11.470127 ignition[896]: INFO : Ignition finished successfully
Aug 13 00:53:11.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.455657 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:53:11.457734 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 00:53:11.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.468800 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:53:11.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.468964 systemd[1]: Stopped ignition-mount.service.
Aug 13 00:53:11.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.474928 systemd[1]: Stopped target network.target.
Aug 13 00:53:11.477495 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:53:11.477646 systemd[1]: Stopped ignition-disks.service.
Aug 13 00:53:11.493684 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:53:11.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.493766 systemd[1]: Stopped ignition-kargs.service.
Aug 13 00:53:11.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.501666 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:53:11.501744 systemd[1]: Stopped ignition-setup.service.
Aug 13 00:53:11.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.504904 systemd[1]: Stopping systemd-networkd.service...
Aug 13 00:53:11.508156 systemd[1]: Stopping systemd-resolved.service...
Aug 13 00:53:11.513721 systemd-networkd[720]: eth0: DHCPv6 lease lost
Aug 13 00:53:11.543000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 00:53:11.514064 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:53:11.515932 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:53:11.550000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 00:53:11.516858 systemd[1]: Finished initrd-cleanup.service.
Aug 13 00:53:11.518303 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:53:11.518443 systemd[1]: Stopped systemd-networkd.service.
Aug 13 00:53:11.525834 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:53:11.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.525967 systemd[1]: Stopped sysroot-boot.service.
Aug 13 00:53:11.530072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:53:11.530223 systemd[1]: Stopped systemd-resolved.service.
Aug 13 00:53:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.534299 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:53:11.534350 systemd[1]: Closed systemd-networkd.socket.
Aug 13 00:53:11.540269 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:53:11.540445 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 00:53:11.552900 systemd[1]: Stopping network-cleanup.service...
Aug 13 00:53:11.555821 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:53:11.555928 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 00:53:11.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.560187 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:53:11.560272 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:53:11.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.561506 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:53:11.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.561554 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 00:53:11.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.565444 systemd[1]: Stopping systemd-udevd.service...
Aug 13 00:53:11.570488 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:53:11.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:11.572773 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:53:11.572946 systemd[1]: Stopped systemd-udevd.service.
Aug 13 00:53:11.576653 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:53:11.576785 systemd[1]: Stopped network-cleanup.service.
Aug 13 00:53:11.578823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:53:11.578870 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 00:53:11.580031 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:53:11.580070 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 00:53:11.584415 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:53:11.584477 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 00:53:11.589582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:53:11.589648 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 00:53:11.591615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:53:11.591683 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 00:53:11.593933 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 00:53:11.594983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:53:11.595054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Aug 13 00:53:11.603894 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:53:11.603979 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 00:53:11.605292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:53:11.605350 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 00:53:11.607675 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:53:11.608237 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:53:11.608339 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 00:53:11.610774 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 00:53:11.613710 systemd[1]: Starting initrd-switch-root.service...
Aug 13 00:53:11.633882 systemd[1]: Switching root.
Aug 13 00:53:11.659799 iscsid[726]: iscsid shutting down.
Aug 13 00:53:11.660876 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:53:11.660922 systemd-journald[197]: Journal stopped
Aug 13 00:53:20.143345 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 00:53:20.143406 kernel: SELinux: Class anon_inode not defined in policy.
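Note the jump in timestamps around the switch-root: the journal stops at 00:53:11.66 and the next captured kernel messages resume at 00:53:20.14, roughly 8.5 seconds later, covering the root pivot and SELinux policy load. A minimal sketch of measuring such gaps from the `Mon DD HH:MM:SS.ffffff` prefixes (assuming plain log lines, e.g. `journalctl -o short-precise` output; the year is not in the prefix, so it is passed in):

```python
from datetime import datetime

def largest_gap(lines, year=2025):
    """Largest gap in seconds between consecutive timestamped log lines."""
    stamps = []
    for line in lines:
        prefix = " ".join(line.split()[:3])  # e.g. "Aug 13 00:53:11.660922"
        try:
            stamps.append(datetime.strptime(prefix, "%b %d %H:%M:%S.%f").replace(year=year))
        except ValueError:
            continue  # skip lines without a parsable timestamp prefix
    gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
    return max(gaps, default=0.0)
```

Applied to the two lines around the journal restart above, this reports a gap of about 8.48 seconds.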
Aug 13 00:53:20.143434 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 00:53:20.143450 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:53:20.143463 kernel: SELinux: policy capability open_perms=1
Aug 13 00:53:20.143476 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:53:20.143490 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:53:20.143503 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:53:20.143517 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:53:20.143530 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:53:20.143543 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:53:20.143565 systemd[1]: Successfully loaded SELinux policy in 66.439ms.
Aug 13 00:53:20.143591 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.602ms.
Aug 13 00:53:20.143607 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:53:20.143623 systemd[1]: Detected virtualization kvm.
Aug 13 00:53:20.143644 systemd[1]: Detected architecture x86-64.
Aug 13 00:53:20.143658 systemd[1]: Detected first boot.
Aug 13 00:53:20.143672 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:53:20.143687 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 00:53:20.143702 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:53:20.143718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
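The `systemd 252 running in system mode (...)` banner above encodes compile-time features as `+NAME`/`-NAME` tokens. A small sketch of splitting that list into enabled and disabled sets (the `parse_features` helper is illustrative; tokens without a sign, such as `default-hierarchy=unified`, are deliberately ignored):

```python
def parse_features(feature_string):
    """Split a systemd feature banner like '+PAM +AUDIT -APPARMOR' into (enabled, disabled)."""
    enabled, disabled = set(), set()
    for tok in feature_string.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])   # e.g. APPARMOR, GNUTLS on this build
    return enabled, disabled
```

For the banner above, this would place SELINUX in the enabled set and APPARMOR in the disabled set, matching the SELinux policy-load messages that precede it.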
Aug 13 00:53:20.143737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:53:20.143753 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:53:20.143775 kernel: kauditd_printk_skb: 46 callbacks suppressed
Aug 13 00:53:20.143788 kernel: audit: type=1334 audit(1755046399.834:83): prog-id=12 op=LOAD
Aug 13 00:53:20.143802 kernel: audit: type=1334 audit(1755046399.834:84): prog-id=3 op=UNLOAD
Aug 13 00:53:20.143815 kernel: audit: type=1334 audit(1755046399.835:85): prog-id=13 op=LOAD
Aug 13 00:53:20.143828 kernel: audit: type=1334 audit(1755046399.839:86): prog-id=14 op=LOAD
Aug 13 00:53:20.143845 kernel: audit: type=1334 audit(1755046399.839:87): prog-id=4 op=UNLOAD
Aug 13 00:53:20.143859 kernel: audit: type=1334 audit(1755046399.839:88): prog-id=5 op=UNLOAD
Aug 13 00:53:20.143872 kernel: audit: type=1334 audit(1755046399.840:89): prog-id=15 op=LOAD
Aug 13 00:53:20.143890 kernel: audit: type=1334 audit(1755046399.840:90): prog-id=12 op=UNLOAD
Aug 13 00:53:20.143911 kernel: audit: type=1334 audit(1755046399.843:91): prog-id=16 op=LOAD
Aug 13 00:53:20.143932 kernel: audit: type=1334 audit(1755046399.845:92): prog-id=17 op=LOAD
Aug 13 00:53:20.143946 systemd[1]: iscsiuio.service: Deactivated successfully.
Aug 13 00:53:20.143964 systemd[1]: Stopped iscsiuio.service.
Aug 13 00:53:20.143979 systemd[1]: iscsid.service: Deactivated successfully.
Aug 13 00:53:20.143993 systemd[1]: Stopped iscsid.service.
Aug 13 00:53:20.144007 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:53:20.144023 systemd[1]: Stopped initrd-switch-root.service.
Aug 13 00:53:20.144039 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:53:20.144064 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:53:20.144080 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:53:20.144134 systemd[1]: Created slice system-getty.slice. Aug 13 00:53:20.144163 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:53:20.144182 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:53:20.144199 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:53:20.144214 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:53:20.144240 systemd[1]: Created slice user.slice. Aug 13 00:53:20.144258 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:53:20.144276 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:53:20.144290 systemd[1]: Set up automount boot.automount. Aug 13 00:53:20.144304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:53:20.144319 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:53:20.144333 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:53:20.144347 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:53:20.144360 systemd[1]: Reached target integritysetup.target. Aug 13 00:53:20.144375 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:53:20.144397 systemd[1]: Reached target remote-fs.target. Aug 13 00:53:20.144412 systemd[1]: Reached target slices.target. Aug 13 00:53:20.144427 systemd[1]: Reached target swap.target. Aug 13 00:53:20.144442 systemd[1]: Reached target torcx.target. Aug 13 00:53:20.144456 systemd[1]: Reached target veritysetup.target. Aug 13 00:53:20.144471 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:53:20.144485 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:53:20.144499 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:53:20.144513 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:53:20.144535 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Aug 13 00:53:20.144550 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:53:20.144564 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:53:20.144578 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:53:20.144592 systemd[1]: Mounting media.mount... Aug 13 00:53:20.144607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:20.144621 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:53:20.144635 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:53:20.144649 systemd[1]: Mounting tmp.mount... Aug 13 00:53:20.144670 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:53:20.144684 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:20.144698 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:53:20.144713 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:53:20.144728 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:20.144742 systemd[1]: Starting modprobe@drm.service... Aug 13 00:53:20.144757 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:20.144770 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:53:20.144784 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:20.144805 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:53:20.144819 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:53:20.144834 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:53:20.144849 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:53:20.144863 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:53:20.144876 systemd[1]: Stopped systemd-journald.service. Aug 13 00:53:20.144890 kernel: fuse: init (API version 7.34) Aug 13 00:53:20.144904 kernel: loop: module loaded Aug 13 00:53:20.144918 systemd[1]: Starting systemd-journald.service... 
Aug 13 00:53:20.144939 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:53:20.144960 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:53:20.144984 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:53:20.145006 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:53:20.145021 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:53:20.145041 systemd[1]: Stopped verity-setup.service. Aug 13 00:53:20.145057 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:20.145072 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:53:20.145086 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:53:20.145149 systemd[1]: Mounted media.mount. Aug 13 00:53:20.145174 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:53:20.145194 systemd-journald[1017]: Journal started Aug 13 00:53:20.145250 systemd-journald[1017]: Runtime Journal (/run/log/journal/d95a0615226541939bc5bb7adffacff4) is 6.0M, max 48.4M, 42.4M free. 
Aug 13 00:53:11.781000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:53:12.060000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:53:12.062000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:53:12.068000 audit: BPF prog-id=10 op=LOAD Aug 13 00:53:12.068000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:53:12.070000 audit: BPF prog-id=11 op=LOAD Aug 13 00:53:12.070000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:53:12.267000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:53:12.267000 audit[932]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b2 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:12.267000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:53:12.271000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:53:12.271000 audit[932]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5999 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:12.271000 audit: CWD cwd="/" Aug 13 00:53:12.271000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:12.271000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:12.271000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:53:19.834000 audit: BPF prog-id=12 op=LOAD Aug 13 00:53:19.834000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:53:19.835000 audit: BPF prog-id=13 op=LOAD Aug 13 00:53:19.839000 audit: BPF prog-id=14 op=LOAD Aug 13 00:53:19.839000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:53:19.839000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:53:19.840000 audit: BPF prog-id=15 op=LOAD Aug 13 00:53:19.840000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:53:19.843000 audit: BPF prog-id=16 op=LOAD Aug 13 00:53:19.845000 audit: BPF prog-id=17 op=LOAD Aug 13 00:53:19.845000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:53:19.845000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:53:19.848000 audit: BPF prog-id=18 op=LOAD Aug 13 00:53:19.848000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:53:19.850000 audit: BPF prog-id=19 op=LOAD Aug 13 00:53:19.852000 audit: BPF prog-id=20 op=LOAD Aug 13 00:53:19.852000 
audit: BPF prog-id=16 op=UNLOAD Aug 13 00:53:19.852000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:53:19.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:19.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:19.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:19.864000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:53:19.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:19.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:20.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.074000 audit: BPF prog-id=21 op=LOAD Aug 13 00:53:20.090000 audit: BPF prog-id=22 op=LOAD Aug 13 00:53:20.093000 audit: BPF prog-id=23 op=LOAD Aug 13 00:53:20.093000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:53:20.094000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:53:20.129000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:53:20.129000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd692f07a0 a2=4000 a3=7ffd692f083c items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:20.129000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:53:20.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:19.831895 systemd[1]: Queued start job for default target multi-user.target. 
Aug 13 00:53:12.252981 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:19.831910 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:53:12.260023 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:53:19.854135 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:53:12.260059 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:53:12.260136 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:53:12.260153 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:53:20.148450 systemd[1]: Started systemd-journald.service. 
Aug 13 00:53:12.260212 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:53:12.260234 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:53:12.260567 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:53:12.260627 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:53:12.260648 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:53:12.266814 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:53:20.149051 systemd[1]: Mounted sys-kernel-tracing.mount. 
Aug 13 00:53:12.266866 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:53:12.266898 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:53:12.266916 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:53:12.266942 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:53:12.266959 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:53:19.184954 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:19.185364 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:19.185539 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="networkd units 
propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:19.186451 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:19.186583 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:53:19.186697 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-08-13T00:53:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:53:20.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.150546 systemd[1]: Mounted tmp.mount. Aug 13 00:53:20.151639 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:53:20.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.152917 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:53:20.155589 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Aug 13 00:53:20.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.158670 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:53:20.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.163788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:20.165379 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:20.166829 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:20.167016 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:20.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:20.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.168416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:20.168591 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:20.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.170145 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:53:20.170349 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:53:20.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.171766 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:20.171989 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:20.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:20.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.173498 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:53:20.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.174988 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:53:20.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.176716 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:53:20.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.178623 systemd[1]: Reached target network-pre.target. Aug 13 00:53:20.181288 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:53:20.183988 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:53:20.185423 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:53:20.188908 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:53:20.193702 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:53:20.200195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:20.250652 systemd[1]: Starting systemd-random-seed.service... 
Aug 13 00:53:20.254957 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:20.258563 systemd-journald[1017]: Time spent on flushing to /var/log/journal/d95a0615226541939bc5bb7adffacff4 is 38.512ms for 1190 entries. Aug 13 00:53:20.258563 systemd-journald[1017]: System Journal (/var/log/journal/d95a0615226541939bc5bb7adffacff4) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:53:20.360986 systemd-journald[1017]: Received client request to flush runtime journal. Aug 13 00:53:20.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.260072 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:53:20.263573 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:53:20.267383 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:53:20.362768 udevadm[1035]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:53:20.268777 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Aug 13 00:53:20.269935 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:53:20.276141 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:53:20.293851 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:53:20.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:20.295251 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:53:20.314912 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:53:20.331083 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:53:20.333894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:53:20.362438 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:53:20.391584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:53:20.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:21.485965 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:53:21.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:21.488000 audit: BPF prog-id=24 op=LOAD Aug 13 00:53:21.488000 audit: BPF prog-id=25 op=LOAD Aug 13 00:53:21.488000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:53:21.488000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:53:21.489821 systemd[1]: Starting systemd-udevd.service... Aug 13 00:53:21.521563 systemd-udevd[1040]: Using default interface naming scheme 'v252'. Aug 13 00:53:21.556293 systemd[1]: Started systemd-udevd.service. 
Aug 13 00:53:21.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:21.559000 audit: BPF prog-id=26 op=LOAD Aug 13 00:53:21.564860 systemd[1]: Starting systemd-networkd.service... Aug 13 00:53:21.578000 audit: BPF prog-id=27 op=LOAD Aug 13 00:53:21.581000 audit: BPF prog-id=28 op=LOAD Aug 13 00:53:21.582000 audit: BPF prog-id=29 op=LOAD Aug 13 00:53:21.583349 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:53:21.612034 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Aug 13 00:53:21.647661 systemd[1]: Started systemd-userdbd.service. Aug 13 00:53:21.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:21.666360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:53:21.684129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:53:21.696002 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:53:21.711000 audit[1048]: AVC avc: denied { confidentiality } for pid=1048 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:53:21.727638 systemd-networkd[1046]: lo: Link UP Aug 13 00:53:21.728114 systemd-networkd[1046]: lo: Gained carrier Aug 13 00:53:21.728770 systemd-networkd[1046]: Enumeration completed Aug 13 00:53:21.728995 systemd[1]: Started systemd-networkd.service. Aug 13 00:53:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:21.731366 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:53:21.733033 systemd-networkd[1046]: eth0: Link UP Aug 13 00:53:21.733189 systemd-networkd[1046]: eth0: Gained carrier Aug 13 00:53:21.738431 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:53:21.711000 audit[1048]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5595e0333e00 a1=338ac a2=7f93f3225bc5 a3=5 items=110 ppid=1040 pid=1048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:21.711000 audit: CWD cwd="/" Aug 13 00:53:21.711000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=1 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=2 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=3 name=(null) inode=14817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=4 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=5 name=(null) inode=14818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=6 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=7 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=8 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=9 name=(null) inode=14820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=10 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=11 name=(null) inode=14821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=12 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=13 name=(null) inode=14822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=14 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=15 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=16 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=17 name=(null) inode=14824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=18 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=19 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=20 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=21 name=(null) inode=14826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=22 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=23 name=(null) inode=14827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:21.711000 audit: PATH item=24 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=25 name=(null) inode=14828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=26 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=27 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=28 name=(null) inode=14825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=29 name=(null) inode=14830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=30 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=31 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=32 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=33 
name=(null) inode=14832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=34 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=35 name=(null) inode=14833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=36 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=37 name=(null) inode=14834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=38 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=39 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=40 name=(null) inode=14831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=41 name=(null) inode=14836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=42 name=(null) inode=14816 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=43 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=44 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=45 name=(null) inode=14838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=46 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=47 name=(null) inode=14839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=48 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=49 name=(null) inode=14840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=50 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=51 name=(null) inode=14841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=52 name=(null) inode=14837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=53 name=(null) inode=14842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=55 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=56 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=57 name=(null) inode=14844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=58 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=59 name=(null) inode=14845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=60 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=61 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=62 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=63 name=(null) inode=14847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=64 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=65 name=(null) inode=14848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=66 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=67 name=(null) inode=14849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=68 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=69 name=(null) inode=14850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=70 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=71 name=(null) inode=14851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=72 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=73 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=74 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=75 name=(null) inode=14853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=76 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=77 name=(null) inode=14854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=78 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:21.711000 audit: PATH item=79 name=(null) inode=14855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=80 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=81 name=(null) inode=14856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=82 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=83 name=(null) inode=14857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=84 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=85 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=86 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=87 name=(null) inode=14859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=88 
name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=89 name=(null) inode=14860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=90 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=91 name=(null) inode=14861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=92 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=93 name=(null) inode=14862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=94 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=95 name=(null) inode=14863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=96 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=97 name=(null) inode=14864 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=98 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=99 name=(null) inode=14865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=100 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=101 name=(null) inode=14866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=102 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=103 name=(null) inode=14867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=104 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=105 name=(null) inode=14868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:21.711000 audit: PATH item=106 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:53:21.711000 audit: PATH item=107 name=(null) inode=14869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:53:21.711000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:53:21.711000 audit: PATH item=109 name=(null) inode=14870 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:53:21.711000 audit: PROCTITLE proctitle="(udev-worker)"
Aug 13 00:53:21.757065 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Aug 13 00:53:21.768637 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 00:53:21.768820 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 00:53:21.769009 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 00:53:21.770428 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:53:21.757363 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:53:21.867451 kernel: kvm: Nested Virtualization enabled
Aug 13 00:53:21.867628 kernel: SVM: kvm: Nested Paging enabled
Aug 13 00:53:21.867653 kernel: SVM: Virtual VMLOAD VMSAVE supported
Aug 13 00:53:21.868162 kernel: SVM: Virtual GIF supported
Aug 13 00:53:21.905146 kernel: EDAC MC: Ver: 3.0.0
Aug 13 00:53:21.930793 systemd[1]: Finished systemd-udev-settle.service.
Aug 13 00:53:21.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Aug 13 00:53:21.934875 systemd[1]: Starting lvm2-activation-early.service...
Aug 13 00:53:21.951511 lvm[1076]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:53:21.998775 systemd[1]: Finished lvm2-activation-early.service.
Aug 13 00:53:22.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.000315 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:53:22.005160 systemd[1]: Starting lvm2-activation.service...
Aug 13 00:53:22.009346 lvm[1077]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:53:22.049240 systemd[1]: Finished lvm2-activation.service.
Aug 13 00:53:22.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.050492 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:53:22.054071 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:53:22.054144 systemd[1]: Reached target local-fs.target.
Aug 13 00:53:22.060042 systemd[1]: Reached target machines.target.
Aug 13 00:53:22.068342 systemd[1]: Starting ldconfig.service...
Aug 13 00:53:22.072663 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:53:22.073497 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:53:22.079177 systemd[1]: Starting systemd-boot-update.service...
Aug 13 00:53:22.083451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Aug 13 00:53:22.090591 systemd[1]: Starting systemd-machine-id-commit.service...
Aug 13 00:53:22.094903 systemd[1]: Starting systemd-sysext.service...
Aug 13 00:53:22.102302 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1079 (bootctl)
Aug 13 00:53:22.104451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Aug 13 00:53:22.112734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Aug 13 00:53:22.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.135827 systemd[1]: Unmounting usr-share-oem.mount...
Aug 13 00:53:22.148395 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Aug 13 00:53:22.148794 systemd[1]: Unmounted usr-share-oem.mount.
Aug 13 00:53:22.174134 kernel: loop0: detected capacity change from 0 to 229808
Aug 13 00:53:22.686177 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:53:22.686964 systemd[1]: Finished systemd-machine-id-commit.service.
Aug 13 00:53:22.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.690183 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:53:22.693555 systemd-fsck[1086]: fsck.fat 4.2 (2021-01-31)
Aug 13 00:53:22.693555 systemd-fsck[1086]: /dev/vda1: 790 files, 119344/258078 clusters
Aug 13 00:53:22.695878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Aug 13 00:53:22.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.699413 systemd[1]: Mounting boot.mount...
Aug 13 00:53:22.707422 systemd[1]: Mounted boot.mount.
Aug 13 00:53:22.713114 kernel: loop1: detected capacity change from 0 to 229808
Aug 13 00:53:22.719070 (sd-sysext)[1092]: Using extensions 'kubernetes'.
Aug 13 00:53:22.719567 (sd-sysext)[1092]: Merged extensions into '/usr'.
Aug 13 00:53:22.721973 systemd[1]: Finished systemd-boot-update.service.
Aug 13 00:53:22.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.739244 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:53:22.740952 systemd[1]: Mounting usr-share-oem.mount...
Aug 13 00:53:22.742384 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:53:22.744528 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:53:22.747232 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:53:22.749652 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:53:22.750600 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:53:22.750817 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:53:22.751009 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:53:22.754969 systemd[1]: Mounted usr-share-oem.mount.
Aug 13 00:53:22.756632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:53:22.756782 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:53:22.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.758502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:53:22.758668 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:53:22.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.760256 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:53:22.760396 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:53:22.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Aug 13 00:53:22.762158 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:53:22.762302 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:53:22.763739 systemd[1]: Finished systemd-sysext.service.
Aug 13 00:53:22.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.766527 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:53:22.767353 systemd-networkd[1046]: eth0: Gained IPv6LL
Aug 13 00:53:22.770125 systemd[1]: Starting systemd-tmpfiles-setup.service...
Aug 13 00:53:22.777046 systemd[1]: Reloading.
Aug 13 00:53:22.780922 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Aug 13 00:53:22.781798 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:53:22.783418 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:53:22.931152 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-08-13T00:53:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:53:22.931196 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-08-13T00:53:22Z" level=info msg="torcx already run"
Aug 13 00:53:22.933890 ldconfig[1078]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:53:23.084135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:23.084151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:23.101402 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:23.152000 audit: BPF prog-id=30 op=LOAD Aug 13 00:53:23.153000 audit: BPF prog-id=31 op=LOAD Aug 13 00:53:23.153000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:53:23.153000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:53:23.154000 audit: BPF prog-id=32 op=LOAD Aug 13 00:53:23.154000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:53:23.154000 audit: BPF prog-id=33 op=LOAD Aug 13 00:53:23.154000 audit: BPF prog-id=34 op=LOAD Aug 13 00:53:23.154000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:53:23.154000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:53:23.155000 audit: BPF prog-id=35 op=LOAD Aug 13 00:53:23.155000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:53:23.157000 audit: BPF prog-id=36 op=LOAD Aug 13 00:53:23.157000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:53:23.157000 audit: BPF prog-id=37 op=LOAD Aug 13 00:53:23.157000 audit: BPF prog-id=38 op=LOAD Aug 13 00:53:23.157000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:53:23.157000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:53:23.161909 systemd[1]: Finished ldconfig.service. Aug 13 00:53:23.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.163842 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Aug 13 00:53:23.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.168299 systemd[1]: Starting audit-rules.service... Aug 13 00:53:23.170344 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:53:23.174000 audit: BPF prog-id=39 op=LOAD Aug 13 00:53:23.177000 audit: BPF prog-id=40 op=LOAD Aug 13 00:53:23.172439 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:53:23.175527 systemd[1]: Starting systemd-resolved.service... Aug 13 00:53:23.178459 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:53:23.181042 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:53:23.183261 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:53:23.186647 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:23.188483 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.188703 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.190030 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:23.192183 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:23.194077 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:23.194929 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.195058 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Aug 13 00:53:23.195173 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:23.195250 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.196190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:23.196315 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:23.197600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:23.197711 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:23.199196 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:53:23.200704 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:23.200813 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:23.202382 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:23.202489 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.203901 systemd[1]: Starting systemd-update-done.service... Aug 13 00:53:23.206938 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.207175 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.208651 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:23.211307 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:23.213714 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:23.214749 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 00:53:23.214914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:23.215067 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:23.215199 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.216604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:23.216807 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:23.219267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:23.219397 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:23.221260 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:23.221420 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:23.222913 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:23.223110 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:23.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:23.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.227625 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.227959 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.229000 audit[1172]: SYSTEM_BOOT pid=1172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.230595 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:23.234609 systemd[1]: Starting modprobe@drm.service... Aug 13 00:53:23.236827 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:23.239195 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:23.240276 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.240424 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:23.241959 systemd[1]: Starting systemd-networkd-wait-online.service... 
Aug 13 00:53:23.243304 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:23.243457 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:23.245519 systemd[1]: Finished systemd-update-done.service. Aug 13 00:53:23.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.248484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:23.248595 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:23.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.250389 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:23.250550 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:23.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:23.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.252357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:23.252499 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:23.254339 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:23.254496 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:23.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:23.255937 systemd[1]: Finished systemd-networkd-wait-online.service. 
Aug 13 00:53:23.256000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:53:23.256657 augenrules[1194]: No rules Aug 13 00:53:23.256000 audit[1194]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc507f7fe0 a2=420 a3=0 items=0 ppid=1161 pid=1194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:23.256000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:53:23.257610 systemd[1]: Finished audit-rules.service. Aug 13 00:53:23.260901 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:23.261110 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.262255 systemd[1]: Finished ensure-sysext.service. Aug 13 00:53:23.264553 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:53:23.294432 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:53:23.295104 systemd-resolved[1167]: Positive Trust Anchors: Aug 13 00:53:23.295117 systemd-timesyncd[1168]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:53:23.295120 systemd-resolved[1167]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:53:23.295149 systemd-resolved[1167]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:53:23.295155 systemd-timesyncd[1168]: Initial clock synchronization to Wed 2025-08-13 00:53:23.449444 UTC. Aug 13 00:53:23.296330 systemd[1]: Reached target time-set.target. Aug 13 00:53:23.302789 systemd-resolved[1167]: Defaulting to hostname 'linux'. Aug 13 00:53:23.304266 systemd[1]: Started systemd-resolved.service. Aug 13 00:53:23.305492 systemd[1]: Reached target network.target. Aug 13 00:53:23.306512 systemd[1]: Reached target network-online.target. Aug 13 00:53:23.307642 systemd[1]: Reached target nss-lookup.target. Aug 13 00:53:23.308799 systemd[1]: Reached target sysinit.target. Aug 13 00:53:23.309933 systemd[1]: Started motdgen.path. Aug 13 00:53:23.310934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:53:23.312548 systemd[1]: Started logrotate.timer. Aug 13 00:53:23.313618 systemd[1]: Started mdadm.timer. Aug 13 00:53:23.314529 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:53:23.315626 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:53:23.315666 systemd[1]: Reached target paths.target. Aug 13 00:53:23.316662 systemd[1]: Reached target timers.target. Aug 13 00:53:23.318062 systemd[1]: Listening on dbus.socket. Aug 13 00:53:23.320400 systemd[1]: Starting docker.socket... Aug 13 00:53:23.323992 systemd[1]: Listening on sshd.socket. 
Aug 13 00:53:23.325157 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:23.325610 systemd[1]: Listening on docker.socket. Aug 13 00:53:23.326629 systemd[1]: Reached target sockets.target. Aug 13 00:53:23.327474 systemd[1]: Reached target basic.target. Aug 13 00:53:23.328334 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.328359 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:23.329400 systemd[1]: Starting containerd.service... Aug 13 00:53:23.331318 systemd[1]: Starting dbus.service... Aug 13 00:53:23.334131 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:53:23.336530 systemd[1]: Starting extend-filesystems.service... Aug 13 00:53:23.337601 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:53:23.339251 systemd[1]: Starting kubelet.service... Aug 13 00:53:23.341366 systemd[1]: Starting motdgen.service... 
Aug 13 00:53:23.356648 extend-filesystems[1205]: Found loop1 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found sr0 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda1 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda2 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda3 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found usr Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda4 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda6 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda7 Aug 13 00:53:23.356648 extend-filesystems[1205]: Found vda9 Aug 13 00:53:23.356648 extend-filesystems[1205]: Checking size of /dev/vda9 Aug 13 00:53:23.433782 jq[1204]: false Aug 13 00:53:23.344167 systemd[1]: Starting prepare-helm.service... Aug 13 00:53:23.346179 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:53:23.348125 systemd[1]: Starting sshd-keygen.service... Aug 13 00:53:23.351532 systemd[1]: Starting systemd-logind.service... Aug 13 00:53:23.352493 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:23.352601 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:53:23.353219 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:53:23.354181 systemd[1]: Starting update-engine.service... Aug 13 00:53:23.420120 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:53:23.423728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:53:23.423966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Aug 13 00:53:23.428167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:53:23.428581 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:53:23.453419 tar[1224]: linux-amd64/LICENSE Aug 13 00:53:23.453419 tar[1224]: linux-amd64/helm Aug 13 00:53:23.455233 jq[1219]: true Aug 13 00:53:23.455809 dbus-daemon[1203]: [system] SELinux support is enabled Aug 13 00:53:23.469700 systemd[1]: Started dbus.service. Aug 13 00:53:23.530672 update_engine[1215]: I0813 00:53:23.530422 1215 main.cc:92] Flatcar Update Engine starting Aug 13 00:53:23.535010 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:53:23.535226 systemd[1]: Finished motdgen.service. Aug 13 00:53:23.537435 update_engine[1215]: I0813 00:53:23.537333 1215 update_check_scheduler.cc:74] Next update check in 2m25s Aug 13 00:53:23.538662 extend-filesystems[1205]: Resized partition /dev/vda9 Aug 13 00:53:23.542796 extend-filesystems[1240]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:53:23.544913 systemd[1]: Started update-engine.service. Aug 13 00:53:23.551472 systemd[1]: Started locksmithd.service. Aug 13 00:53:23.555354 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:53:23.555423 systemd[1]: Reached target system-config.target. Aug 13 00:53:23.558981 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:53:23.559028 systemd[1]: Reached target user-config.target. Aug 13 00:53:23.581623 env[1226]: time="2025-08-13T00:53:23.581550798Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:53:23.643816 env[1226]: time="2025-08-13T00:53:23.643681731Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Aug 13 00:53:23.643973 env[1226]: time="2025-08-13T00:53:23.643929486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645507 env[1226]: time="2025-08-13T00:53:23.645469252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645507 env[1226]: time="2025-08-13T00:53:23.645500391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645735 env[1226]: time="2025-08-13T00:53:23.645702349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645735 env[1226]: time="2025-08-13T00:53:23.645729450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645836 env[1226]: time="2025-08-13T00:53:23.645741883Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:53:23.645836 env[1226]: time="2025-08-13T00:53:23.645751161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:23.645836 env[1226]: time="2025-08-13T00:53:23.645813728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:23.646444 env[1226]: time="2025-08-13T00:53:23.646054940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:53:23.646444 env[1226]: time="2025-08-13T00:53:23.646192939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:23.646444 env[1226]: time="2025-08-13T00:53:23.646207105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:53:23.646444 env[1226]: time="2025-08-13T00:53:23.646253172Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:53:23.646444 env[1226]: time="2025-08-13T00:53:23.646263932Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:53:23.653882 jq[1232]: true Aug 13 00:53:23.662339 systemd-logind[1213]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:53:23.662375 systemd-logind[1213]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:53:23.663821 systemd-logind[1213]: New seat seat0. Aug 13 00:53:23.668329 systemd[1]: Started systemd-logind.service. Aug 13 00:53:23.701127 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:53:23.734345 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:53:23.741134 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:53:23.743032 env[1226]: time="2025-08-13T00:53:23.742965062Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:53:23.743032 env[1226]: time="2025-08-13T00:53:23.743026267Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Aug 13 00:53:23.743131 env[1226]: time="2025-08-13T00:53:23.743039842Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:53:23.743131 env[1226]: time="2025-08-13T00:53:23.743087271Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.743131 env[1226]: time="2025-08-13T00:53:23.743115925Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.743131 env[1226]: time="2025-08-13T00:53:23.743128969Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743142004Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743154357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743165879Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743180075Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743192138Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.765692 env[1226]: time="2025-08-13T00:53:23.743204271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:53:23.765826 env[1226]: time="2025-08-13T00:53:23.765785933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Aug 13 00:53:23.766042 env[1226]: time="2025-08-13T00:53:23.766007018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:53:23.766506 env[1226]: time="2025-08-13T00:53:23.766455148Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:53:23.766553 env[1226]: time="2025-08-13T00:53:23.766527263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766553 env[1226]: time="2025-08-13T00:53:23.766541700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:53:23.766660 env[1226]: time="2025-08-13T00:53:23.766633913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766660 env[1226]: time="2025-08-13T00:53:23.766656876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766775 env[1226]: time="2025-08-13T00:53:23.766671463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766775 env[1226]: time="2025-08-13T00:53:23.766763636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766775 env[1226]: time="2025-08-13T00:53:23.766775949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.766918 extend-filesystems[1240]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:53:23.766918 extend-filesystems[1240]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:53:23.766918 extend-filesystems[1240]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.766788813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.766801196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.766811756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.766828598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767228227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767252582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767267691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767282749Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767298789Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767309920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767336690Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:53:23.774801 env[1226]: time="2025-08-13T00:53:23.767377567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:53:23.775318 bash[1261]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:53:23.775451 extend-filesystems[1205]: Resized filesystem in /dev/vda9 Aug 13 00:53:23.769279 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:53:23.769455 systemd[1]: Finished extend-filesystems.service. Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.767731310Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 
StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.768611911Z" level=info msg="Connect containerd service" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.768674468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769523660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769779439Z" level=info msg="Start subscribing containerd event" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769866823Z" level=info msg="Start recovering state" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769869108Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769908321Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769950330Z" level=info msg="containerd successfully booted in 0.189077s" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.769959888Z" level=info msg="Start event monitor" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.770004091Z" level=info msg="Start snapshots syncer" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.770014891Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:53:23.779253 env[1226]: time="2025-08-13T00:53:23.770021213Z" level=info msg="Start streaming server" Aug 13 00:53:23.772609 systemd[1]: Started containerd.service. Aug 13 00:53:23.775514 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:53:24.013830 sshd_keygen[1230]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:53:24.072500 systemd[1]: Finished sshd-keygen.service. Aug 13 00:53:24.123833 systemd[1]: Starting issuegen.service... Aug 13 00:53:24.130057 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:53:24.130215 systemd[1]: Finished issuegen.service. Aug 13 00:53:24.132410 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:53:24.142290 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:53:24.146279 systemd[1]: Started getty@tty1.service. Aug 13 00:53:24.149443 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:53:24.150685 systemd[1]: Reached target getty.target. Aug 13 00:53:24.378932 tar[1224]: linux-amd64/README.md Aug 13 00:53:24.383911 systemd[1]: Finished prepare-helm.service. Aug 13 00:53:25.245612 systemd[1]: Started kubelet.service. Aug 13 00:53:25.247206 systemd[1]: Reached target multi-user.target. Aug 13 00:53:25.249450 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:53:25.256232 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Aug 13 00:53:25.256421 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:53:25.257699 systemd[1]: Startup finished in 762ms (kernel) + 8.984s (initrd) + 13.554s (userspace) = 23.302s. Aug 13 00:53:25.786021 kubelet[1285]: E0813 00:53:25.785946 1285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:25.787723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:25.787845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:25.788145 systemd[1]: kubelet.service: Consumed 1.962s CPU time. Aug 13 00:53:32.818901 systemd[1]: Created slice system-sshd.slice. Aug 13 00:53:32.820149 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:52276.service. Aug 13 00:53:32.860538 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 52276 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:53:32.862259 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:32.872328 systemd-logind[1213]: New session 1 of user core. Aug 13 00:53:32.873949 systemd[1]: Created slice user-500.slice. Aug 13 00:53:32.876288 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:53:32.887246 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:53:32.889159 systemd[1]: Starting user@500.service... Aug 13 00:53:32.892462 (systemd)[1297]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:32.970618 systemd[1297]: Queued start job for default target default.target. Aug 13 00:53:32.971182 systemd[1297]: Reached target paths.target. Aug 13 00:53:32.971210 systemd[1297]: Reached target sockets.target. 
Aug 13 00:53:32.971222 systemd[1297]: Reached target timers.target. Aug 13 00:53:32.971233 systemd[1297]: Reached target basic.target. Aug 13 00:53:32.971275 systemd[1297]: Reached target default.target. Aug 13 00:53:32.971305 systemd[1297]: Startup finished in 72ms. Aug 13 00:53:32.971406 systemd[1]: Started user@500.service. Aug 13 00:53:32.972518 systemd[1]: Started session-1.scope. Aug 13 00:53:33.030172 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:52282.service. Aug 13 00:53:33.068573 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 52282 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:53:33.070025 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:33.074216 systemd-logind[1213]: New session 2 of user core. Aug 13 00:53:33.075519 systemd[1]: Started session-2.scope. Aug 13 00:53:33.129127 sshd[1306]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:33.131840 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:52282.service: Deactivated successfully. Aug 13 00:53:33.132450 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:53:33.132978 systemd-logind[1213]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:53:33.134063 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:52286.service. Aug 13 00:53:33.134844 systemd-logind[1213]: Removed session 2. Aug 13 00:53:33.166722 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 52286 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:53:33.167773 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:33.171154 systemd-logind[1213]: New session 3 of user core. Aug 13 00:53:33.172164 systemd[1]: Started session-3.scope. Aug 13 00:53:33.223884 sshd[1312]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:33.227218 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:52286.service: Deactivated successfully. 
Aug 13 00:53:33.227845 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:53:33.228359 systemd-logind[1213]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:53:33.229561 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:52300.service. Aug 13 00:53:33.230336 systemd-logind[1213]: Removed session 3. Aug 13 00:53:33.265745 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 52300 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:53:33.267026 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:33.270339 systemd-logind[1213]: New session 4 of user core. Aug 13 00:53:33.271253 systemd[1]: Started session-4.scope. Aug 13 00:53:33.325111 sshd[1318]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:33.327471 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:52300.service: Deactivated successfully. Aug 13 00:53:33.328015 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:53:33.328549 systemd-logind[1213]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:53:33.329554 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:52306.service. Aug 13 00:53:33.330221 systemd-logind[1213]: Removed session 4. Aug 13 00:53:33.363995 sshd[1324]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:53:33.365093 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:33.368285 systemd-logind[1213]: New session 5 of user core. Aug 13 00:53:33.368964 systemd[1]: Started session-5.scope. Aug 13 00:53:33.425079 sudo[1327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:53:33.425283 sudo[1327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:33.464736 systemd[1]: Starting docker.service... 
Aug 13 00:53:33.578089 env[1339]: time="2025-08-13T00:53:33.577936231Z" level=info msg="Starting up" Aug 13 00:53:33.579632 env[1339]: time="2025-08-13T00:53:33.579578447Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:33.579632 env[1339]: time="2025-08-13T00:53:33.579618401Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:33.579801 env[1339]: time="2025-08-13T00:53:33.579658487Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:33.579801 env[1339]: time="2025-08-13T00:53:33.579679487Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:33.582288 env[1339]: time="2025-08-13T00:53:33.582254382Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:33.582288 env[1339]: time="2025-08-13T00:53:33.582274002Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:33.582367 env[1339]: time="2025-08-13T00:53:33.582302560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:33.582367 env[1339]: time="2025-08-13T00:53:33.582311609Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:33.587335 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2982937983-merged.mount: Deactivated successfully. Aug 13 00:53:34.645764 env[1339]: time="2025-08-13T00:53:34.645689998Z" level=info msg="Loading containers: start." Aug 13 00:53:34.827157 kernel: Initializing XFRM netlink socket Aug 13 00:53:34.857885 env[1339]: time="2025-08-13T00:53:34.857825257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 00:53:34.909348 systemd-networkd[1046]: docker0: Link UP Aug 13 00:53:34.928948 env[1339]: time="2025-08-13T00:53:34.928885883Z" level=info msg="Loading containers: done." Aug 13 00:53:34.946478 env[1339]: time="2025-08-13T00:53:34.946420621Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:53:34.946680 env[1339]: time="2025-08-13T00:53:34.946651266Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:53:34.946793 env[1339]: time="2025-08-13T00:53:34.946771719Z" level=info msg="Daemon has completed initialization" Aug 13 00:53:34.965720 systemd[1]: Started docker.service. Aug 13 00:53:35.018594 env[1339]: time="2025-08-13T00:53:35.018504583Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:53:35.783403 env[1226]: time="2025-08-13T00:53:35.783195496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:53:35.791716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:53:35.791942 systemd[1]: Stopped kubelet.service. Aug 13 00:53:35.791991 systemd[1]: kubelet.service: Consumed 1.962s CPU time. Aug 13 00:53:35.793847 systemd[1]: Starting kubelet.service... Aug 13 00:53:35.996896 systemd[1]: Started kubelet.service. 
Aug 13 00:53:36.434961 kubelet[1473]: E0813 00:53:36.434892 1473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:36.437897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:36.438025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:36.952857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084916961.mount: Deactivated successfully. Aug 13 00:53:38.609560 env[1226]: time="2025-08-13T00:53:38.609470529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.611189 env[1226]: time="2025-08-13T00:53:38.611152867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.613619 env[1226]: time="2025-08-13T00:53:38.613567655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.617225 env[1226]: time="2025-08-13T00:53:38.617154533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.617966 env[1226]: time="2025-08-13T00:53:38.617915947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference 
\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 00:53:38.619050 env[1226]: time="2025-08-13T00:53:38.618955725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:53:41.774717 env[1226]: time="2025-08-13T00:53:41.774630736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.776741 env[1226]: time="2025-08-13T00:53:41.776704913Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.778902 env[1226]: time="2025-08-13T00:53:41.778863638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.780736 env[1226]: time="2025-08-13T00:53:41.780690150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.781462 env[1226]: time="2025-08-13T00:53:41.781416645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 00:53:41.782064 env[1226]: time="2025-08-13T00:53:41.782003240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:53:46.050143 env[1226]: time="2025-08-13T00:53:46.050033101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:46.307418 
env[1226]: time="2025-08-13T00:53:46.307235451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:46.309556 env[1226]: time="2025-08-13T00:53:46.309474431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:46.311539 env[1226]: time="2025-08-13T00:53:46.311490236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:46.312318 env[1226]: time="2025-08-13T00:53:46.312281336Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 00:53:46.313714 env[1226]: time="2025-08-13T00:53:46.313677386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:53:46.542024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:53:46.542279 systemd[1]: Stopped kubelet.service. Aug 13 00:53:46.543933 systemd[1]: Starting kubelet.service... Aug 13 00:53:46.663839 systemd[1]: Started kubelet.service. 
Aug 13 00:53:46.925271 kubelet[1486]: E0813 00:53:46.924985 1486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:46.927077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:46.927253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:47.997473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747640592.mount: Deactivated successfully. Aug 13 00:53:48.708846 env[1226]: time="2025-08-13T00:53:48.708769876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:48.710709 env[1226]: time="2025-08-13T00:53:48.710659393Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:48.712069 env[1226]: time="2025-08-13T00:53:48.712022403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:48.713332 env[1226]: time="2025-08-13T00:53:48.713301189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:48.713655 env[1226]: time="2025-08-13T00:53:48.713622923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference 
\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 00:53:48.714207 env[1226]: time="2025-08-13T00:53:48.714164201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:53:49.402685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455223916.mount: Deactivated successfully. Aug 13 00:53:52.210738 env[1226]: time="2025-08-13T00:53:52.210675723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.212899 env[1226]: time="2025-08-13T00:53:52.212833134Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.215150 env[1226]: time="2025-08-13T00:53:52.215121863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.217004 env[1226]: time="2025-08-13T00:53:52.216956783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.217949 env[1226]: time="2025-08-13T00:53:52.217904133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 00:53:52.218693 env[1226]: time="2025-08-13T00:53:52.218637784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:53:52.732819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075113672.mount: Deactivated successfully. 
Aug 13 00:53:52.741015 env[1226]: time="2025-08-13T00:53:52.740968747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.742919 env[1226]: time="2025-08-13T00:53:52.742883313Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.744459 env[1226]: time="2025-08-13T00:53:52.744403609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.745773 env[1226]: time="2025-08-13T00:53:52.745734203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:52.746360 env[1226]: time="2025-08-13T00:53:52.746324771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:53:52.746959 env[1226]: time="2025-08-13T00:53:52.746918996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:53:53.993359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653019449.mount: Deactivated successfully. Aug 13 00:53:57.041852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:53:57.042085 systemd[1]: Stopped kubelet.service. Aug 13 00:53:57.043708 systemd[1]: Starting kubelet.service... Aug 13 00:53:57.153188 systemd[1]: Started kubelet.service. 
Aug 13 00:53:57.236188 kubelet[1498]: E0813 00:53:57.236137 1498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:57.238371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:57.238613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:58.980006 env[1226]: time="2025-08-13T00:53:58.979937288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:58.982064 env[1226]: time="2025-08-13T00:53:58.982014963Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:58.984122 env[1226]: time="2025-08-13T00:53:58.984046868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:58.986110 env[1226]: time="2025-08-13T00:53:58.986050402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:58.986825 env[1226]: time="2025-08-13T00:53:58.986781664Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 00:54:02.465622 systemd[1]: Stopped kubelet.service. Aug 13 00:54:02.469087 systemd[1]: Starting kubelet.service... 
Aug 13 00:54:02.614396 systemd[1]: Reloading. Aug 13 00:54:02.689071 /usr/lib/systemd/system-generators/torcx-generator[1555]: time="2025-08-13T00:54:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:02.689135 /usr/lib/systemd/system-generators/torcx-generator[1555]: time="2025-08-13T00:54:02Z" level=info msg="torcx already run" Aug 13 00:54:03.298068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:03.298088 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:03.315716 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:03.393315 systemd[1]: Started kubelet.service. Aug 13 00:54:03.395213 systemd[1]: Stopping kubelet.service... Aug 13 00:54:03.395507 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:03.395722 systemd[1]: Stopped kubelet.service. Aug 13 00:54:03.397509 systemd[1]: Starting kubelet.service... Aug 13 00:54:03.488285 systemd[1]: Started kubelet.service. Aug 13 00:54:03.525291 kubelet[1604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:03.525743 kubelet[1604]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:03.525743 kubelet[1604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:03.525918 kubelet[1604]: I0813 00:54:03.525787 1604 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:04.862954 kubelet[1604]: I0813 00:54:04.862899 1604 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:54:04.862954 kubelet[1604]: I0813 00:54:04.862933 1604 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:04.863363 kubelet[1604]: I0813 00:54:04.863181 1604 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:54:04.890266 kubelet[1604]: E0813 00:54:04.890213 1604 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:54:04.892345 kubelet[1604]: I0813 00:54:04.892316 1604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:04.898439 kubelet[1604]: E0813 00:54:04.898415 1604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:04.898439 kubelet[1604]: I0813 00:54:04.898440 1604 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:04.902839 kubelet[1604]: I0813 00:54:04.902811 1604 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:54:04.903079 kubelet[1604]: I0813 00:54:04.903047 1604 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:04.903247 kubelet[1604]: I0813 00:54:04.903066 1604 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:54:04.903370 kubelet[1604]: I0813 00:54:04.903253 1604 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:04.903370 kubelet[1604]: I0813 00:54:04.903264 1604 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:54:04.903421 kubelet[1604]: I0813 00:54:04.903389 1604 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:04.907653 kubelet[1604]: I0813 00:54:04.907624 1604 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:54:04.907653 kubelet[1604]: I0813 00:54:04.907653 1604 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:04.907760 kubelet[1604]: I0813 00:54:04.907684 1604 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:54:04.914064 kubelet[1604]: I0813 00:54:04.914042 1604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:04.922535 kubelet[1604]: E0813 00:54:04.922500 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:54:04.923947 kubelet[1604]: E0813 00:54:04.923923 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:54:04.926388 kubelet[1604]: I0813 00:54:04.926367 1604 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:04.926858 kubelet[1604]: I0813 00:54:04.926844 1604 kubelet.go:935] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:54:04.928341 kubelet[1604]: W0813 00:54:04.928325 1604 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:54:04.930701 kubelet[1604]: I0813 00:54:04.930682 1604 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:54:04.930758 kubelet[1604]: I0813 00:54:04.930726 1604 server.go:1289] "Started kubelet" Aug 13 00:54:04.938413 kubelet[1604]: I0813 00:54:04.938391 1604 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:04.939247 kubelet[1604]: I0813 00:54:04.939233 1604 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:54:04.939739 kubelet[1604]: I0813 00:54:04.938370 1604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:04.940049 kubelet[1604]: I0813 00:54:04.940033 1604 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:04.941214 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
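The `systemd[1]: Reloading.` pass above flags two deprecated directives in locksmithd.service (`CPUShares=` at line 8, `MemoryLimit=` at line 9). The usual migration is a drop-in override rather than editing the vendor unit; the values below are illustrative placeholders, since the unit's actual settings do not appear in this log (for reference, the old `CPUShares=` default of 1024 corresponds to `CPUWeight=100`):

```ini
# Hypothetical drop-in: /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
# Replaces the deprecated directives flagged during "Reloading." above.
[Service]
# Was: CPUShares= (deprecated). 100 is the CPUWeight= default, as 1024 was for shares.
CPUWeight=100
# Was: MemoryLimit= (deprecated). 512M is a placeholder, not the unit's real limit.
MemoryMax=512M
```

After placing the file, `systemctl daemon-reload` picks it up, and the two deprecation warnings should no longer appear on the next reload cycle.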
Aug 13 00:54:04.941489 kubelet[1604]: I0813 00:54:04.941468 1604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:04.943408 kubelet[1604]: E0813 00:54:04.943385 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:04.943553 kubelet[1604]: I0813 00:54:04.943535 1604 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:54:04.943741 kubelet[1604]: I0813 00:54:04.943725 1604 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:54:04.943798 kubelet[1604]: I0813 00:54:04.943776 1604 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:04.944709 kubelet[1604]: E0813 00:54:04.944151 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:54:04.944709 kubelet[1604]: I0813 00:54:04.944424 1604 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:54:04.944709 kubelet[1604]: I0813 00:54:04.944493 1604 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:04.946572 kubelet[1604]: I0813 00:54:04.946550 1604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:04.947616 kubelet[1604]: E0813 00:54:04.942026 1604 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d716a681b2d default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:04.930702125 +0000 UTC m=+1.439073444,LastTimestamp:2025-08-13 00:54:04.930702125 +0000 UTC m=+1.439073444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:04.948621 kubelet[1604]: E0813 00:54:04.948576 1604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Aug 13 00:54:04.948872 kubelet[1604]: E0813 00:54:04.948856 1604 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:04.950876 kubelet[1604]: I0813 00:54:04.950855 1604 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:54:04.961243 kubelet[1604]: I0813 00:54:04.961189 1604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:04.962144 kubelet[1604]: I0813 00:54:04.962117 1604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:54:04.962144 kubelet[1604]: I0813 00:54:04.962144 1604 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:54:04.962237 kubelet[1604]: I0813 00:54:04.962161 1604 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:54:04.962237 kubelet[1604]: I0813 00:54:04.962169 1604 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:54:04.962237 kubelet[1604]: E0813 00:54:04.962203 1604 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:04.962960 kubelet[1604]: E0813 00:54:04.962627 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:54:04.966752 kubelet[1604]: I0813 00:54:04.966730 1604 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:54:04.966752 kubelet[1604]: I0813 00:54:04.966746 1604 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:04.966752 kubelet[1604]: I0813 00:54:04.966764 1604 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:04.969588 kubelet[1604]: I0813 00:54:04.969560 1604 policy_none.go:49] "None policy: Start" Aug 13 00:54:04.969588 kubelet[1604]: I0813 00:54:04.969585 1604 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:54:04.969692 kubelet[1604]: I0813 00:54:04.969599 1604 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:04.975296 systemd[1]: Created slice kubepods.slice. Aug 13 00:54:04.979442 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:54:04.981797 systemd[1]: Created slice kubepods-besteffort.slice. 
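The kubelet warns above that `--container-runtime-endpoint` and `--volume-plugin-dir` should be set via the file passed to `--config`. A minimal KubeletConfiguration sketch follows; the flexvolume path, static pod path, cgroup driver, and eviction thresholds are taken from this log's own messages and nodeConfig dump, while the containerd socket path is an assumption (the endpoint value itself is not printed):

```yaml
# Hypothetical kubelet config file sketch; pass it via --config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed; not shown in the log
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # path from the probe.go message
cgroupDriver: systemd              # matches CgroupDriver in the nodeConfig dump
staticPodPath: /etc/kubernetes/manifests  # "Adding static pod path" above
evictionHard:                      # same values as HardEvictionThresholds above
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
```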
Aug 13 00:54:04.988967 kubelet[1604]: E0813 00:54:04.988919 1604 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:54:04.989174 kubelet[1604]: I0813 00:54:04.989116 1604 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:04.989174 kubelet[1604]: I0813 00:54:04.989131 1604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:04.989660 kubelet[1604]: I0813 00:54:04.989449 1604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:04.999292 kubelet[1604]: E0813 00:54:04.999268 1604 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:54:04.999522 kubelet[1604]: E0813 00:54:04.999504 1604 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:54:05.074732 systemd[1]: Created slice kubepods-burstable-pod2ecad8ded054ba99d192f2339d87cb91.slice. Aug 13 00:54:05.080741 kubelet[1604]: E0813 00:54:05.080704 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:05.082714 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 13 00:54:05.084913 kubelet[1604]: E0813 00:54:05.084880 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:05.086346 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
Aug 13 00:54:05.087919 kubelet[1604]: E0813 00:54:05.087884 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:05.090654 kubelet[1604]: I0813 00:54:05.090609 1604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:05.091127 kubelet[1604]: E0813 00:54:05.091076 1604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Aug 13 00:54:05.150352 kubelet[1604]: E0813 00:54:05.150264 1604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Aug 13 00:54:05.244655 kubelet[1604]: I0813 00:54:05.244571 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:05.244655 kubelet[1604]: I0813 00:54:05.244637 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:05.244655 kubelet[1604]: I0813 00:54:05.244657 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:05.244655 kubelet[1604]: I0813 00:54:05.244674 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:05.244962 kubelet[1604]: I0813 00:54:05.244690 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:05.244962 kubelet[1604]: I0813 00:54:05.244816 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:05.244962 kubelet[1604]: I0813 00:54:05.244836 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:05.244962 kubelet[1604]: I0813 00:54:05.244851 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:05.244962 kubelet[1604]: I0813 00:54:05.244864 1604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:05.292959 kubelet[1604]: I0813 00:54:05.292914 1604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:05.293357 kubelet[1604]: E0813 00:54:05.293329 1604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Aug 13 00:54:05.381250 kubelet[1604]: E0813 00:54:05.381204 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:05.381973 env[1226]: time="2025-08-13T00:54:05.381937271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ecad8ded054ba99d192f2339d87cb91,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:05.386016 kubelet[1604]: E0813 00:54:05.385999 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:05.386389 env[1226]: time="2025-08-13T00:54:05.386352467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:05.388586 kubelet[1604]: E0813 00:54:05.388553 1604 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:05.388974 env[1226]: time="2025-08-13T00:54:05.388934896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:05.550882 kubelet[1604]: E0813 00:54:05.550846 1604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Aug 13 00:54:05.694652 kubelet[1604]: I0813 00:54:05.694613 1604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:05.694939 kubelet[1604]: E0813 00:54:05.694914 1604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Aug 13 00:54:06.031250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998574969.mount: Deactivated successfully. 
Aug 13 00:54:06.037321 env[1226]: time="2025-08-13T00:54:06.037263703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.038071 env[1226]: time="2025-08-13T00:54:06.038025083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.039899 env[1226]: time="2025-08-13T00:54:06.039869535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.042112 env[1226]: time="2025-08-13T00:54:06.042041011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.044115 env[1226]: time="2025-08-13T00:54:06.044060052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.045441 env[1226]: time="2025-08-13T00:54:06.045407430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.047202 env[1226]: time="2025-08-13T00:54:06.047169011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.049226 env[1226]: time="2025-08-13T00:54:06.049192532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.051469 env[1226]: time="2025-08-13T00:54:06.051428401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.052978 env[1226]: time="2025-08-13T00:54:06.052951591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.054638 env[1226]: time="2025-08-13T00:54:06.054572732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.055373 env[1226]: time="2025-08-13T00:54:06.055333661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:06.101645 env[1226]: time="2025-08-13T00:54:06.101534676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:06.101806 env[1226]: time="2025-08-13T00:54:06.101655455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:06.101806 env[1226]: time="2025-08-13T00:54:06.101680367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:06.101863 env[1226]: time="2025-08-13T00:54:06.101823482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3ab2546dffd5160f9353e36af7b12881e81297756b87037d4ef86b2185418a6 pid=1667 runtime=io.containerd.runc.v2 Aug 13 00:54:06.102244 env[1226]: time="2025-08-13T00:54:06.102075752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:06.102244 env[1226]: time="2025-08-13T00:54:06.102122086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:06.102244 env[1226]: time="2025-08-13T00:54:06.102132829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:06.102661 env[1226]: time="2025-08-13T00:54:06.102313060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34c6f9a1ba7f8f1184c701c3724483404632372813282b39101389448f688aa9 pid=1668 runtime=io.containerd.runc.v2 Aug 13 00:54:06.127746 systemd[1]: Started cri-containerd-34c6f9a1ba7f8f1184c701c3724483404632372813282b39101389448f688aa9.scope. Aug 13 00:54:06.147527 systemd[1]: Started cri-containerd-c3ab2546dffd5160f9353e36af7b12881e81297756b87037d4ef86b2185418a6.scope. 
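The recurring dns.go "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than the limit of three, so the kubelet drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical resolv.conf that would produce this, reconstructed from the applied line (the fourth entry is invented for illustration; the fix is simply to keep at most three):

```text
# Hypothetical /etc/resolv.conf; only the first three nameserver lines are used.
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.168.1.1   # assumed extra entry; anything past three is omitted
```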
Aug 13 00:54:06.165060 kubelet[1604]: E0813 00:54:06.165028 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:54:06.168222 kubelet[1604]: E0813 00:54:06.168021 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:54:06.248335 kubelet[1604]: E0813 00:54:06.248281 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:54:06.257246 kubelet[1604]: E0813 00:54:06.257162 1604 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:54:06.351639 kubelet[1604]: E0813 00:54:06.351580 1604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" Aug 13 00:54:06.372314 env[1226]: time="2025-08-13T00:54:06.363943815Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:06.372314 env[1226]: time="2025-08-13T00:54:06.364020222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:06.372314 env[1226]: time="2025-08-13T00:54:06.364030844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:06.372314 env[1226]: time="2025-08-13T00:54:06.364262611Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f39576b201cd6a91eff9264904b9eb21b3e2f47061abb66b0f4fbde8a3ab455 pid=1650 runtime=io.containerd.runc.v2 Aug 13 00:54:06.419346 systemd[1]: Started cri-containerd-8f39576b201cd6a91eff9264904b9eb21b3e2f47061abb66b0f4fbde8a3ab455.scope. Aug 13 00:54:06.481390 env[1226]: time="2025-08-13T00:54:06.480564547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ecad8ded054ba99d192f2339d87cb91,Namespace:kube-system,Attempt:0,} returns sandbox id \"34c6f9a1ba7f8f1184c701c3724483404632372813282b39101389448f688aa9\"" Aug 13 00:54:06.481754 kubelet[1604]: E0813 00:54:06.481725 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:06.491703 env[1226]: time="2025-08-13T00:54:06.491649507Z" level=info msg="CreateContainer within sandbox \"34c6f9a1ba7f8f1184c701c3724483404632372813282b39101389448f688aa9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:54:06.496958 kubelet[1604]: I0813 00:54:06.496587 1604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:06.496958 kubelet[1604]: E0813 00:54:06.496923 1604 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Aug 13 00:54:06.501035 env[1226]: time="2025-08-13T00:54:06.500994570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3ab2546dffd5160f9353e36af7b12881e81297756b87037d4ef86b2185418a6\"" Aug 13 00:54:06.501680 kubelet[1604]: E0813 00:54:06.501653 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:06.505681 env[1226]: time="2025-08-13T00:54:06.505622159Z" level=info msg="CreateContainer within sandbox \"c3ab2546dffd5160f9353e36af7b12881e81297756b87037d4ef86b2185418a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:54:06.512781 env[1226]: time="2025-08-13T00:54:06.512736955Z" level=info msg="CreateContainer within sandbox \"34c6f9a1ba7f8f1184c701c3724483404632372813282b39101389448f688aa9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49d83c61cce949760882f1a438b867b29fa9651abb433cd9e75103323fb9771d\"" Aug 13 00:54:06.513387 env[1226]: time="2025-08-13T00:54:06.513312201Z" level=info msg="StartContainer for \"49d83c61cce949760882f1a438b867b29fa9651abb433cd9e75103323fb9771d\"" Aug 13 00:54:06.518857 env[1226]: time="2025-08-13T00:54:06.518817959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f39576b201cd6a91eff9264904b9eb21b3e2f47061abb66b0f4fbde8a3ab455\"" Aug 13 00:54:06.519877 kubelet[1604]: E0813 00:54:06.519713 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:06.524420 env[1226]: time="2025-08-13T00:54:06.524385405Z" level=info msg="CreateContainer within sandbox \"8f39576b201cd6a91eff9264904b9eb21b3e2f47061abb66b0f4fbde8a3ab455\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:54:06.525585 env[1226]: time="2025-08-13T00:54:06.525552021Z" level=info msg="CreateContainer within sandbox \"c3ab2546dffd5160f9353e36af7b12881e81297756b87037d4ef86b2185418a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de9ac6af563c2d29b6315674747f1d5445446e87d7c2c3e55ba749b259725364\"" Aug 13 00:54:06.525866 env[1226]: time="2025-08-13T00:54:06.525835956Z" level=info msg="StartContainer for \"de9ac6af563c2d29b6315674747f1d5445446e87d7c2c3e55ba749b259725364\"" Aug 13 00:54:06.530388 systemd[1]: Started cri-containerd-49d83c61cce949760882f1a438b867b29fa9651abb433cd9e75103323fb9771d.scope. Aug 13 00:54:06.543076 env[1226]: time="2025-08-13T00:54:06.538183568Z" level=info msg="CreateContainer within sandbox \"8f39576b201cd6a91eff9264904b9eb21b3e2f47061abb66b0f4fbde8a3ab455\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21583d44a6b40171348ca755470917b1dc936dace2b1f9bad313e89dceb48e5c\"" Aug 13 00:54:06.543076 env[1226]: time="2025-08-13T00:54:06.538721497Z" level=info msg="StartContainer for \"21583d44a6b40171348ca755470917b1dc936dace2b1f9bad313e89dceb48e5c\"" Aug 13 00:54:06.548828 systemd[1]: Started cri-containerd-de9ac6af563c2d29b6315674747f1d5445446e87d7c2c3e55ba749b259725364.scope. Aug 13 00:54:06.567623 systemd[1]: Started cri-containerd-21583d44a6b40171348ca755470917b1dc936dace2b1f9bad313e89dceb48e5c.scope. 
Aug 13 00:54:06.584044 env[1226]: time="2025-08-13T00:54:06.583977815Z" level=info msg="StartContainer for \"49d83c61cce949760882f1a438b867b29fa9651abb433cd9e75103323fb9771d\" returns successfully" Aug 13 00:54:06.605605 env[1226]: time="2025-08-13T00:54:06.605459395Z" level=info msg="StartContainer for \"de9ac6af563c2d29b6315674747f1d5445446e87d7c2c3e55ba749b259725364\" returns successfully" Aug 13 00:54:06.620857 env[1226]: time="2025-08-13T00:54:06.620184187Z" level=info msg="StartContainer for \"21583d44a6b40171348ca755470917b1dc936dace2b1f9bad313e89dceb48e5c\" returns successfully" Aug 13 00:54:06.969150 kubelet[1604]: E0813 00:54:06.969007 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:06.969328 kubelet[1604]: E0813 00:54:06.969232 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:06.971816 kubelet[1604]: E0813 00:54:06.971790 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:06.971924 kubelet[1604]: E0813 00:54:06.971901 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:06.973389 kubelet[1604]: E0813 00:54:06.973364 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:06.973467 kubelet[1604]: E0813 00:54:06.973449 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:07.976172 kubelet[1604]: E0813 
00:54:07.976136 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:07.976543 kubelet[1604]: E0813 00:54:07.976249 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:07.976543 kubelet[1604]: E0813 00:54:07.976422 1604 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:07.976543 kubelet[1604]: E0813 00:54:07.976496 1604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:08.081845 kubelet[1604]: E0813 00:54:08.081797 1604 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:54:08.098396 kubelet[1604]: I0813 00:54:08.098355 1604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:08.334271 kubelet[1604]: I0813 00:54:08.332318 1604 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:54:08.334271 kubelet[1604]: E0813 00:54:08.332385 1604 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:54:08.347353 kubelet[1604]: E0813 00:54:08.347310 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.447789 kubelet[1604]: E0813 00:54:08.447720 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.548787 kubelet[1604]: E0813 00:54:08.548757 1604 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.649862 kubelet[1604]: E0813 00:54:08.649631 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.750530 kubelet[1604]: E0813 00:54:08.750453 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.851396 kubelet[1604]: E0813 00:54:08.851321 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.952042 kubelet[1604]: E0813 00:54:08.951899 1604 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:08.958285 update_engine[1215]: I0813 00:54:08.958215 1215 update_attempter.cc:509] Updating boot flags... Aug 13 00:54:09.049522 kubelet[1604]: I0813 00:54:09.048758 1604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:09.058164 kubelet[1604]: E0813 00:54:09.058132 1604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:09.058244 kubelet[1604]: I0813 00:54:09.058168 1604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:09.061577 kubelet[1604]: E0813 00:54:09.060588 1604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:09.061577 kubelet[1604]: I0813 00:54:09.060621 1604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:09.063942 kubelet[1604]: E0813 00:54:09.063528 1604 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:09.921520 kubelet[1604]: I0813 00:54:09.921455 1604 apiserver.go:52] "Watching apiserver" Aug 13 00:54:09.944148 kubelet[1604]: I0813 00:54:09.944067 1604 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:54:10.435368 systemd[1]: Reloading. Aug 13 00:54:10.509472 /usr/lib/systemd/system-generators/torcx-generator[1924]: time="2025-08-13T00:54:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:10.509508 /usr/lib/systemd/system-generators/torcx-generator[1924]: time="2025-08-13T00:54:10Z" level=info msg="torcx already run" Aug 13 00:54:10.575329 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:10.575350 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:10.593728 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:10.688959 kubelet[1604]: I0813 00:54:10.688854 1604 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:10.689045 systemd[1]: Stopping kubelet.service... Aug 13 00:54:10.709585 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:10.709808 systemd[1]: Stopped kubelet.service. 
Aug 13 00:54:10.709882 systemd[1]: kubelet.service: Consumed 1.584s CPU time. Aug 13 00:54:10.711702 systemd[1]: Starting kubelet.service... Aug 13 00:54:10.808877 systemd[1]: Started kubelet.service. Aug 13 00:54:10.855557 kubelet[1969]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:10.855999 kubelet[1969]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:10.856077 kubelet[1969]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:10.856222 kubelet[1969]: I0813 00:54:10.856193 1969 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:10.862602 kubelet[1969]: I0813 00:54:10.862566 1969 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:54:10.862602 kubelet[1969]: I0813 00:54:10.862590 1969 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:10.862784 kubelet[1969]: I0813 00:54:10.862772 1969 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:54:10.863903 kubelet[1969]: I0813 00:54:10.863849 1969 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:54:10.975321 kubelet[1969]: I0813 00:54:10.866048 1969 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:10.979515 kubelet[1969]: E0813 00:54:10.979489 1969 log.go:32] "RuntimeConfig 
from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:10.979515 kubelet[1969]: I0813 00:54:10.979514 1969 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:10.982795 kubelet[1969]: I0813 00:54:10.982770 1969 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:54:10.982958 kubelet[1969]: I0813 00:54:10.982928 1969 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:10.983083 kubelet[1969]: I0813 00:54:10.982948 1969 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"
MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:54:10.983187 kubelet[1969]: I0813 00:54:10.983084 1969 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:10.983187 kubelet[1969]: I0813 00:54:10.983106 1969 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:54:10.983187 kubelet[1969]: I0813 00:54:10.983146 1969 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:10.983281 kubelet[1969]: I0813 00:54:10.983271 1969 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:54:10.983308 kubelet[1969]: I0813 00:54:10.983285 1969 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:10.983308 kubelet[1969]: I0813 00:54:10.983302 1969 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:54:10.983686 kubelet[1969]: I0813 00:54:10.983665 1969 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:10.986200 kubelet[1969]: I0813 00:54:10.986182 1969 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:10.986616 kubelet[1969]: I0813 00:54:10.986597 1969 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:54:10.988867 kubelet[1969]: I0813 00:54:10.988771 1969 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:54:10.988867 kubelet[1969]: I0813 00:54:10.988814 1969 server.go:1289] "Started kubelet" Aug 13 00:54:10.989922 kubelet[1969]: I0813 00:54:10.989885 1969 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:10.990897 kubelet[1969]: I0813 00:54:10.990404 1969 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:10.991481 kubelet[1969]: I0813 00:54:10.991460 1969 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:10.991584 kubelet[1969]: I0813 00:54:10.991565 1969 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:10.995388 kubelet[1969]: I0813 00:54:10.995358 1969 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:54:10.996152 kubelet[1969]: I0813 00:54:10.996138 1969 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:54:10.996283 kubelet[1969]: I0813 00:54:10.996259 1969 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:10.997604 kubelet[1969]: I0813 00:54:10.997587 1969 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:54:10.997906 kubelet[1969]: I0813 00:54:10.997891 1969 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:10.999972 kubelet[1969]: I0813 00:54:10.999952 1969 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:54:11.000404 kubelet[1969]: I0813 00:54:11.000380 1969 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:11.001315 kubelet[1969]: I0813 00:54:11.001299 1969 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:54:11.001444 kubelet[1969]: E0813 00:54:11.001421 1969 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:11.005549 kubelet[1969]: I0813 00:54:11.005505 1969 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:11.006360 kubelet[1969]: I0813 00:54:11.006335 1969 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:54:11.006360 kubelet[1969]: I0813 00:54:11.006356 1969 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:54:11.006426 kubelet[1969]: I0813 00:54:11.006382 1969 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:54:11.006426 kubelet[1969]: I0813 00:54:11.006390 1969 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:54:11.006476 kubelet[1969]: E0813 00:54:11.006431 1969 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:11.029063 kubelet[1969]: I0813 00:54:11.029030 1969 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:54:11.029063 kubelet[1969]: I0813 00:54:11.029046 1969 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:11.029063 kubelet[1969]: I0813 00:54:11.029064 1969 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:11.029285 kubelet[1969]: I0813 00:54:11.029215 1969 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:54:11.029285 kubelet[1969]: I0813 00:54:11.029229 1969 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:54:11.029285 kubelet[1969]: I0813 00:54:11.029245 1969 policy_none.go:49] "None policy: Start" Aug 13 00:54:11.029285 kubelet[1969]: I0813 00:54:11.029253 1969 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:54:11.029285 kubelet[1969]: I0813 00:54:11.029261 1969 state_mem.go:35] "Initializing new in-memory state 
store" Aug 13 00:54:11.029427 kubelet[1969]: I0813 00:54:11.029350 1969 state_mem.go:75] "Updated machine memory state" Aug 13 00:54:11.032936 kubelet[1969]: E0813 00:54:11.032905 1969 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:54:11.033063 kubelet[1969]: I0813 00:54:11.033044 1969 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:11.033122 kubelet[1969]: I0813 00:54:11.033059 1969 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:11.033300 kubelet[1969]: I0813 00:54:11.033250 1969 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:11.034650 kubelet[1969]: E0813 00:54:11.034627 1969 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:54:11.107876 kubelet[1969]: I0813 00:54:11.107832 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:11.108085 kubelet[1969]: I0813 00:54:11.107980 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:11.108085 kubelet[1969]: I0813 00:54:11.108009 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:11.140055 kubelet[1969]: I0813 00:54:11.140030 1969 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:11.200232 kubelet[1969]: I0813 00:54:11.200195 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 
13 00:54:11.200297 kubelet[1969]: I0813 00:54:11.200234 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:11.200297 kubelet[1969]: I0813 00:54:11.200259 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:11.200297 kubelet[1969]: I0813 00:54:11.200281 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:11.200371 kubelet[1969]: I0813 00:54:11.200343 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:11.200397 kubelet[1969]: I0813 00:54:11.200372 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:11.200397 kubelet[1969]: I0813 00:54:11.200391 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:11.200452 kubelet[1969]: I0813 00:54:11.200408 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:11.200452 kubelet[1969]: I0813 00:54:11.200425 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ecad8ded054ba99d192f2339d87cb91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ecad8ded054ba99d192f2339d87cb91\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:11.429909 kubelet[1969]: E0813 00:54:11.429858 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:11.430131 kubelet[1969]: E0813 00:54:11.430111 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:11.430211 kubelet[1969]: E0813 00:54:11.430127 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:11.616685 kubelet[1969]: I0813 00:54:11.616632 1969 
kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:54:11.616852 kubelet[1969]: I0813 00:54:11.616720 1969 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:54:11.984737 kubelet[1969]: I0813 00:54:11.984696 1969 apiserver.go:52] "Watching apiserver" Aug 13 00:54:11.998171 kubelet[1969]: I0813 00:54:11.998140 1969 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:54:12.019742 kubelet[1969]: I0813 00:54:12.019707 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:12.019892 kubelet[1969]: I0813 00:54:12.019868 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:12.020079 kubelet[1969]: I0813 00:54:12.020056 1969 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:12.887794 kubelet[1969]: E0813 00:54:12.887729 1969 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:12.888220 kubelet[1969]: E0813 00:54:12.887928 1969 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:12.888467 kubelet[1969]: E0813 00:54:12.888444 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:12.888734 kubelet[1969]: E0813 00:54:12.888704 1969 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:12.888892 kubelet[1969]: E0813 00:54:12.888869 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:12.888996 kubelet[1969]: E0813 00:54:12.888975 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.001857 kubelet[1969]: I0813 00:54:13.001786 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.001769384 podStartE2EDuration="2.001769384s" podCreationTimestamp="2025-08-13 00:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:13.001506787 +0000 UTC m=+2.189110095" watchObservedRunningTime="2025-08-13 00:54:13.001769384 +0000 UTC m=+2.189372702" Aug 13 00:54:13.002767 sudo[2008]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:54:13.003384 sudo[2008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:54:13.020994 kubelet[1969]: E0813 00:54:13.020956 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.020994 kubelet[1969]: E0813 00:54:13.020979 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.021201 kubelet[1969]: E0813 00:54:13.021059 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.024980 kubelet[1969]: I0813 00:54:13.024913 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.024895757 podStartE2EDuration="2.024895757s" podCreationTimestamp="2025-08-13 00:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:13.014598446 +0000 UTC m=+2.202201764" watchObservedRunningTime="2025-08-13 00:54:13.024895757 +0000 UTC m=+2.212499075" Aug 13 00:54:13.025171 kubelet[1969]: I0813 00:54:13.025045 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.02504136 podStartE2EDuration="2.02504136s" podCreationTimestamp="2025-08-13 00:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:13.023525748 +0000 UTC m=+2.211129056" watchObservedRunningTime="2025-08-13 00:54:13.02504136 +0000 UTC m=+2.212644678" Aug 13 00:54:13.591874 sudo[2008]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:14.971505 sudo[1327]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:14.973317 sshd[1324]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:14.975637 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:52306.service: Deactivated successfully. Aug 13 00:54:14.976333 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:54:14.976461 systemd[1]: session-5.scope: Consumed 5.969s CPU time. Aug 13 00:54:14.976899 systemd-logind[1213]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:54:14.977742 systemd-logind[1213]: Removed session 5. 
Aug 13 00:54:15.127669 kubelet[1969]: E0813 00:54:15.127629 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:15.722907 kubelet[1969]: I0813 00:54:15.722874 1969 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:54:15.723216 env[1226]: time="2025-08-13T00:54:15.723171229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:54:15.723502 kubelet[1969]: I0813 00:54:15.723323 1969 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:54:16.022047 kubelet[1969]: E0813 00:54:16.021906 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:16.027156 kubelet[1969]: E0813 00:54:16.027084 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:16.387378 systemd[1]: Created slice kubepods-besteffort-pod7388b277_a640_4407_9296_9a0723305e59.slice. Aug 13 00:54:16.401440 systemd[1]: Created slice kubepods-burstable-pod5cdb1c13_6e2f_4139_8655_d48371cf2856.slice. 
Aug 13 00:54:16.434872 kubelet[1969]: I0813 00:54:16.434808 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7388b277-a640-4407-9296-9a0723305e59-xtables-lock\") pod \"kube-proxy-lzkkg\" (UID: \"7388b277-a640-4407-9296-9a0723305e59\") " pod="kube-system/kube-proxy-lzkkg"
Aug 13 00:54:16.434872 kubelet[1969]: I0813 00:54:16.434857 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-run\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.434872 kubelet[1969]: I0813 00:54:16.434889 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-cgroup\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435373 kubelet[1969]: I0813 00:54:16.434904 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-xtables-lock\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435373 kubelet[1969]: I0813 00:54:16.434918 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-net\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435373 kubelet[1969]: I0813 00:54:16.434932 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7388b277-a640-4407-9296-9a0723305e59-lib-modules\") pod \"kube-proxy-lzkkg\" (UID: \"7388b277-a640-4407-9296-9a0723305e59\") " pod="kube-system/kube-proxy-lzkkg"
Aug 13 00:54:16.435373 kubelet[1969]: I0813 00:54:16.434947 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxnk\" (UniqueName: \"kubernetes.io/projected/7388b277-a640-4407-9296-9a0723305e59-kube-api-access-scxnk\") pod \"kube-proxy-lzkkg\" (UID: \"7388b277-a640-4407-9296-9a0723305e59\") " pod="kube-system/kube-proxy-lzkkg"
Aug 13 00:54:16.435373 kubelet[1969]: I0813 00:54:16.434985 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-etc-cni-netd\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435515 kubelet[1969]: I0813 00:54:16.435030 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cdb1c13-6e2f-4139-8655-d48371cf2856-clustermesh-secrets\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435515 kubelet[1969]: I0813 00:54:16.435058 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-config-path\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435515 kubelet[1969]: I0813 00:54:16.435079 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-kernel\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435515 kubelet[1969]: I0813 00:54:16.435122 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-hubble-tls\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435515 kubelet[1969]: I0813 00:54:16.435145 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cni-path\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435638 kubelet[1969]: I0813 00:54:16.435182 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86n5\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435638 kubelet[1969]: I0813 00:54:16.435224 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7388b277-a640-4407-9296-9a0723305e59-kube-proxy\") pod \"kube-proxy-lzkkg\" (UID: \"7388b277-a640-4407-9296-9a0723305e59\") " pod="kube-system/kube-proxy-lzkkg"
Aug 13 00:54:16.435638 kubelet[1969]: I0813 00:54:16.435244 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-bpf-maps\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435638 kubelet[1969]: I0813 00:54:16.435261 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-hostproc\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.435638 kubelet[1969]: I0813 00:54:16.435277 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-lib-modules\") pod \"cilium-6n7xp\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") " pod="kube-system/cilium-6n7xp"
Aug 13 00:54:16.537074 kubelet[1969]: I0813 00:54:16.537019 1969 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 00:54:16.587687 kubelet[1969]: E0813 00:54:16.587643 1969 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.587687 kubelet[1969]: E0813 00:54:16.587679 1969 projected.go:194] Error preparing data for projected volume kube-api-access-scxnk for pod kube-system/kube-proxy-lzkkg: configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.588029 kubelet[1969]: E0813 00:54:16.587765 1969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7388b277-a640-4407-9296-9a0723305e59-kube-api-access-scxnk podName:7388b277-a640-4407-9296-9a0723305e59 nodeName:}" failed. No retries permitted until 2025-08-13 00:54:17.087739268 +0000 UTC m=+6.275342586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-scxnk" (UniqueName: "kubernetes.io/projected/7388b277-a640-4407-9296-9a0723305e59-kube-api-access-scxnk") pod "kube-proxy-lzkkg" (UID: "7388b277-a640-4407-9296-9a0723305e59") : configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.588249 kubelet[1969]: E0813 00:54:16.588206 1969 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.588249 kubelet[1969]: E0813 00:54:16.588242 1969 projected.go:194] Error preparing data for projected volume kube-api-access-j86n5 for pod kube-system/cilium-6n7xp: configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.588357 kubelet[1969]: E0813 00:54:16.588319 1969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5 podName:5cdb1c13-6e2f-4139-8655-d48371cf2856 nodeName:}" failed. No retries permitted until 2025-08-13 00:54:17.088294683 +0000 UTC m=+6.275897991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j86n5" (UniqueName: "kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5") pod "cilium-6n7xp" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856") : configmap "kube-root-ca.crt" not found
Aug 13 00:54:16.925813 systemd[1]: Created slice kubepods-besteffort-pod7b35d23d_6b85_4838_972f_ee61b825d323.slice.
Aug 13 00:54:16.939164 kubelet[1969]: I0813 00:54:16.939090 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b35d23d-6b85-4838-972f-ee61b825d323-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pmg92\" (UID: \"7b35d23d-6b85-4838-972f-ee61b825d323\") " pod="kube-system/cilium-operator-6c4d7847fc-pmg92"
Aug 13 00:54:16.939164 kubelet[1969]: I0813 00:54:16.939167 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxf2s\" (UniqueName: \"kubernetes.io/projected/7b35d23d-6b85-4838-972f-ee61b825d323-kube-api-access-dxf2s\") pod \"cilium-operator-6c4d7847fc-pmg92\" (UID: \"7b35d23d-6b85-4838-972f-ee61b825d323\") " pod="kube-system/cilium-operator-6c4d7847fc-pmg92"
Aug 13 00:54:17.228905 kubelet[1969]: E0813 00:54:17.228627 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:17.229157 env[1226]: time="2025-08-13T00:54:17.229110478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pmg92,Uid:7b35d23d-6b85-4838-972f-ee61b825d323,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:17.297763 kubelet[1969]: E0813 00:54:17.297735 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:17.298204 env[1226]: time="2025-08-13T00:54:17.298164989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzkkg,Uid:7388b277-a640-4407-9296-9a0723305e59,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:17.305458 kubelet[1969]: E0813 00:54:17.305429 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:17.305772 env[1226]: time="2025-08-13T00:54:17.305749292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6n7xp,Uid:5cdb1c13-6e2f-4139-8655-d48371cf2856,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:18.158336 env[1226]: time="2025-08-13T00:54:18.158241561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:18.158485 env[1226]: time="2025-08-13T00:54:18.158336930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:18.158485 env[1226]: time="2025-08-13T00:54:18.158370315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:18.158665 env[1226]: time="2025-08-13T00:54:18.158624108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f pid=2069 runtime=io.containerd.runc.v2
Aug 13 00:54:18.170028 env[1226]: time="2025-08-13T00:54:18.169944794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:18.170028 env[1226]: time="2025-08-13T00:54:18.170022969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:18.170199 env[1226]: time="2025-08-13T00:54:18.170046264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:18.170254 env[1226]: time="2025-08-13T00:54:18.170221622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cb61ca920f16648d83389ebe14ef50e5b6ce78e4ab837d9fa1fa3f9918d76a9 pid=2085 runtime=io.containerd.runc.v2
Aug 13 00:54:18.185181 systemd[1]: Started cri-containerd-2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f.scope.
Aug 13 00:54:18.187593 systemd[1]: Started cri-containerd-7cb61ca920f16648d83389ebe14ef50e5b6ce78e4ab837d9fa1fa3f9918d76a9.scope.
Aug 13 00:54:18.195223 env[1226]: time="2025-08-13T00:54:18.195044593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:18.195223 env[1226]: time="2025-08-13T00:54:18.195084021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:18.195223 env[1226]: time="2025-08-13T00:54:18.195151324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:18.196366 env[1226]: time="2025-08-13T00:54:18.196333886Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52 pid=2120 runtime=io.containerd.runc.v2
Aug 13 00:54:18.209420 systemd[1]: Started cri-containerd-30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52.scope.
Aug 13 00:54:18.220833 env[1226]: time="2025-08-13T00:54:18.220763628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzkkg,Uid:7388b277-a640-4407-9296-9a0723305e59,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cb61ca920f16648d83389ebe14ef50e5b6ce78e4ab837d9fa1fa3f9918d76a9\""
Aug 13 00:54:18.221721 kubelet[1969]: E0813 00:54:18.221693 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:18.239229 env[1226]: time="2025-08-13T00:54:18.238381623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pmg92,Uid:7b35d23d-6b85-4838-972f-ee61b825d323,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\""
Aug 13 00:54:18.243329 env[1226]: time="2025-08-13T00:54:18.243286014Z" level=info msg="CreateContainer within sandbox \"7cb61ca920f16648d83389ebe14ef50e5b6ce78e4ab837d9fa1fa3f9918d76a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:54:18.244277 kubelet[1969]: E0813 00:54:18.244252 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:18.245535 env[1226]: time="2025-08-13T00:54:18.245276886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:54:18.252132 env[1226]: time="2025-08-13T00:54:18.252061571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6n7xp,Uid:5cdb1c13-6e2f-4139-8655-d48371cf2856,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\""
Aug 13 00:54:18.252668 kubelet[1969]: E0813 00:54:18.252615 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:18.263657 env[1226]: time="2025-08-13T00:54:18.263610308Z" level=info msg="CreateContainer within sandbox \"7cb61ca920f16648d83389ebe14ef50e5b6ce78e4ab837d9fa1fa3f9918d76a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"365467c19a2da64efe8371978e2a0b2caffa2da6a32010fa7e71f600c5942cd7\""
Aug 13 00:54:18.264281 env[1226]: time="2025-08-13T00:54:18.264252309Z" level=info msg="StartContainer for \"365467c19a2da64efe8371978e2a0b2caffa2da6a32010fa7e71f600c5942cd7\""
Aug 13 00:54:18.279715 systemd[1]: Started cri-containerd-365467c19a2da64efe8371978e2a0b2caffa2da6a32010fa7e71f600c5942cd7.scope.
Aug 13 00:54:18.311581 env[1226]: time="2025-08-13T00:54:18.310683727Z" level=info msg="StartContainer for \"365467c19a2da64efe8371978e2a0b2caffa2da6a32010fa7e71f600c5942cd7\" returns successfully"
Aug 13 00:54:19.033909 kubelet[1969]: E0813 00:54:19.033597 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:19.050964 kubelet[1969]: I0813 00:54:19.050893 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzkkg" podStartSLOduration=3.050870817 podStartE2EDuration="3.050870817s" podCreationTimestamp="2025-08-13 00:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:19.047331845 +0000 UTC m=+8.234935193" watchObservedRunningTime="2025-08-13 00:54:19.050870817 +0000 UTC m=+8.238474135"
Aug 13 00:54:19.559821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161907278.mount: Deactivated successfully.
Aug 13 00:54:19.825457 kubelet[1969]: E0813 00:54:19.825264 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:20.036384 kubelet[1969]: E0813 00:54:20.036346 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:20.662449 env[1226]: time="2025-08-13T00:54:20.662366335Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.664351 env[1226]: time="2025-08-13T00:54:20.664287142Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.665983 env[1226]: time="2025-08-13T00:54:20.665950873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.666501 env[1226]: time="2025-08-13T00:54:20.666434706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:54:20.667589 env[1226]: time="2025-08-13T00:54:20.667547861Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:54:20.674284 env[1226]: time="2025-08-13T00:54:20.674236378Z" level=info msg="CreateContainer within sandbox \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:54:20.689256 env[1226]: time="2025-08-13T00:54:20.689192176Z" level=info msg="CreateContainer within sandbox \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\""
Aug 13 00:54:20.689775 env[1226]: time="2025-08-13T00:54:20.689713455Z" level=info msg="StartContainer for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\""
Aug 13 00:54:20.712196 systemd[1]: Started cri-containerd-c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a.scope.
Aug 13 00:54:20.741383 env[1226]: time="2025-08-13T00:54:20.741280246Z" level=info msg="StartContainer for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" returns successfully"
Aug 13 00:54:21.038529 kubelet[1969]: E0813 00:54:21.038386 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:21.687245 systemd[1]: run-containerd-runc-k8s.io-c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a-runc.qbnrm8.mount: Deactivated successfully.
Aug 13 00:54:22.040401 kubelet[1969]: E0813 00:54:22.040286 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:25.129623 kubelet[1969]: E0813 00:54:25.129532 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:25.149231 kubelet[1969]: I0813 00:54:25.149161 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pmg92" podStartSLOduration=6.726737959 podStartE2EDuration="9.149142419s" podCreationTimestamp="2025-08-13 00:54:16 +0000 UTC" firstStartedPulling="2025-08-13 00:54:18.24497052 +0000 UTC m=+7.432573838" lastFinishedPulling="2025-08-13 00:54:20.66737497 +0000 UTC m=+9.854978298" observedRunningTime="2025-08-13 00:54:21.380045683 +0000 UTC m=+10.567649001" watchObservedRunningTime="2025-08-13 00:54:25.149142419 +0000 UTC m=+14.336745737"
Aug 13 00:54:26.048116 kubelet[1969]: E0813 00:54:26.048071 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:28.884231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058719651.mount: Deactivated successfully.
Aug 13 00:54:33.555209 env[1226]: time="2025-08-13T00:54:33.555134329Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:33.557970 env[1226]: time="2025-08-13T00:54:33.557896949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:33.563587 env[1226]: time="2025-08-13T00:54:33.563523789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:33.564482 env[1226]: time="2025-08-13T00:54:33.564414554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:54:33.575647 env[1226]: time="2025-08-13T00:54:33.575595472Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:54:33.588552 env[1226]: time="2025-08-13T00:54:33.588494677Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\""
Aug 13 00:54:33.589285 env[1226]: time="2025-08-13T00:54:33.589205925Z" level=info msg="StartContainer for \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\""
Aug 13 00:54:33.611375 systemd[1]: Started cri-containerd-ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003.scope.
Aug 13 00:54:33.645666 env[1226]: time="2025-08-13T00:54:33.645590459Z" level=info msg="StartContainer for \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\" returns successfully"
Aug 13 00:54:33.655205 systemd[1]: cri-containerd-ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003.scope: Deactivated successfully.
Aug 13 00:54:33.867314 env[1226]: time="2025-08-13T00:54:33.867159633Z" level=info msg="shim disconnected" id=ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003
Aug 13 00:54:33.867314 env[1226]: time="2025-08-13T00:54:33.867241422Z" level=warning msg="cleaning up after shim disconnected" id=ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003 namespace=k8s.io
Aug 13 00:54:33.867314 env[1226]: time="2025-08-13T00:54:33.867258495Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:33.874995 env[1226]: time="2025-08-13T00:54:33.874932168Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2446 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:34.062905 kubelet[1969]: E0813 00:54:34.062857 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:34.067699 env[1226]: time="2025-08-13T00:54:34.067634767Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:54:34.081933 env[1226]: time="2025-08-13T00:54:34.081877892Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\""
Aug 13 00:54:34.084729 env[1226]: time="2025-08-13T00:54:34.084685707Z" level=info msg="StartContainer for \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\""
Aug 13 00:54:34.099969 systemd[1]: Started cri-containerd-cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd.scope.
Aug 13 00:54:34.130488 env[1226]: time="2025-08-13T00:54:34.129223993Z" level=info msg="StartContainer for \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\" returns successfully"
Aug 13 00:54:34.139508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:54:34.139785 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:54:34.139991 systemd[1]: Stopping systemd-sysctl.service...
Aug 13 00:54:34.141480 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:54:34.144471 systemd[1]: cri-containerd-cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd.scope: Deactivated successfully.
Aug 13 00:54:34.157042 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:54:34.171734 env[1226]: time="2025-08-13T00:54:34.171691831Z" level=info msg="shim disconnected" id=cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd
Aug 13 00:54:34.171734 env[1226]: time="2025-08-13T00:54:34.171732090Z" level=warning msg="cleaning up after shim disconnected" id=cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd namespace=k8s.io
Aug 13 00:54:34.171929 env[1226]: time="2025-08-13T00:54:34.171743110Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:34.179284 env[1226]: time="2025-08-13T00:54:34.179243827Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2510 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:34.584427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003-rootfs.mount: Deactivated successfully.
Aug 13 00:54:35.064903 kubelet[1969]: E0813 00:54:35.064875 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:35.068904 env[1226]: time="2025-08-13T00:54:35.068861859Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:54:35.087994 env[1226]: time="2025-08-13T00:54:35.087935633Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\""
Aug 13 00:54:35.089557 env[1226]: time="2025-08-13T00:54:35.088507148Z" level=info msg="StartContainer for \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\""
Aug 13 00:54:35.105494 systemd[1]: Started cri-containerd-d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6.scope.
Aug 13 00:54:35.132285 env[1226]: time="2025-08-13T00:54:35.132234236Z" level=info msg="StartContainer for \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\" returns successfully"
Aug 13 00:54:35.133448 systemd[1]: cri-containerd-d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6.scope: Deactivated successfully.
Aug 13 00:54:35.157880 env[1226]: time="2025-08-13T00:54:35.157826527Z" level=info msg="shim disconnected" id=d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6
Aug 13 00:54:35.157880 env[1226]: time="2025-08-13T00:54:35.157870331Z" level=warning msg="cleaning up after shim disconnected" id=d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6 namespace=k8s.io
Aug 13 00:54:35.157880 env[1226]: time="2025-08-13T00:54:35.157878667Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:35.166526 env[1226]: time="2025-08-13T00:54:35.166459210Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2568 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:35.584545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6-rootfs.mount: Deactivated successfully.
Aug 13 00:54:36.068875 kubelet[1969]: E0813 00:54:36.068838 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:36.289372 env[1226]: time="2025-08-13T00:54:36.289304693Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:54:36.387639 env[1226]: time="2025-08-13T00:54:36.387503195Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\""
Aug 13 00:54:36.388323 env[1226]: time="2025-08-13T00:54:36.388257182Z" level=info msg="StartContainer for \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\""
Aug 13 00:54:36.404755 systemd[1]: Started cri-containerd-87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8.scope.
Aug 13 00:54:36.427830 systemd[1]: cri-containerd-87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8.scope: Deactivated successfully.
Aug 13 00:54:36.429192 env[1226]: time="2025-08-13T00:54:36.429142546Z" level=info msg="StartContainer for \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\" returns successfully"
Aug 13 00:54:36.451704 env[1226]: time="2025-08-13T00:54:36.451619298Z" level=info msg="shim disconnected" id=87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8
Aug 13 00:54:36.451704 env[1226]: time="2025-08-13T00:54:36.451699523Z" level=warning msg="cleaning up after shim disconnected" id=87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8 namespace=k8s.io
Aug 13 00:54:36.451704 env[1226]: time="2025-08-13T00:54:36.451710004Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:36.461115 env[1226]: time="2025-08-13T00:54:36.461031565Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2624 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:36.585303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8-rootfs.mount: Deactivated successfully.
Aug 13 00:54:37.074054 kubelet[1969]: E0813 00:54:37.073980 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:54:37.078204 env[1226]: time="2025-08-13T00:54:37.078132305Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:54:37.096782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144138594.mount: Deactivated successfully.
Aug 13 00:54:37.102233 env[1226]: time="2025-08-13T00:54:37.102168308Z" level=info msg="CreateContainer within sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\"" Aug 13 00:54:37.102841 env[1226]: time="2025-08-13T00:54:37.102797282Z" level=info msg="StartContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\"" Aug 13 00:54:37.124050 systemd[1]: Started cri-containerd-9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400.scope. Aug 13 00:54:37.158976 env[1226]: time="2025-08-13T00:54:37.158908712Z" level=info msg="StartContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" returns successfully" Aug 13 00:54:37.227827 kubelet[1969]: I0813 00:54:37.227783 1969 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:54:37.585433 systemd[1]: run-containerd-runc-k8s.io-9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400-runc.1RfRLZ.mount: Deactivated successfully. Aug 13 00:54:37.719641 systemd[1]: Created slice kubepods-burstable-pod1af790d1_6932_46c3_a46b_364922a312e9.slice. Aug 13 00:54:37.731936 systemd[1]: Created slice kubepods-burstable-pod38ba63ac_0d67_41a3_8ffe_0fd803d8474e.slice. 
Aug 13 00:54:37.782399 kubelet[1969]: I0813 00:54:37.782349 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38ba63ac-0d67-41a3-8ffe-0fd803d8474e-config-volume\") pod \"coredns-674b8bbfcf-495zb\" (UID: \"38ba63ac-0d67-41a3-8ffe-0fd803d8474e\") " pod="kube-system/coredns-674b8bbfcf-495zb" Aug 13 00:54:37.782399 kubelet[1969]: I0813 00:54:37.782402 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9f42\" (UniqueName: \"kubernetes.io/projected/38ba63ac-0d67-41a3-8ffe-0fd803d8474e-kube-api-access-l9f42\") pod \"coredns-674b8bbfcf-495zb\" (UID: \"38ba63ac-0d67-41a3-8ffe-0fd803d8474e\") " pod="kube-system/coredns-674b8bbfcf-495zb" Aug 13 00:54:37.782630 kubelet[1969]: I0813 00:54:37.782441 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1af790d1-6932-46c3-a46b-364922a312e9-config-volume\") pod \"coredns-674b8bbfcf-q796w\" (UID: \"1af790d1-6932-46c3-a46b-364922a312e9\") " pod="kube-system/coredns-674b8bbfcf-q796w" Aug 13 00:54:37.782630 kubelet[1969]: I0813 00:54:37.782479 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vbl\" (UniqueName: \"kubernetes.io/projected/1af790d1-6932-46c3-a46b-364922a312e9-kube-api-access-p7vbl\") pod \"coredns-674b8bbfcf-q796w\" (UID: \"1af790d1-6932-46c3-a46b-364922a312e9\") " pod="kube-system/coredns-674b8bbfcf-q796w" Aug 13 00:54:38.022966 kubelet[1969]: E0813 00:54:38.022844 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.023781 env[1226]: time="2025-08-13T00:54:38.023712871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q796w,Uid:1af790d1-6932-46c3-a46b-364922a312e9,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:38.034265 kubelet[1969]: E0813 00:54:38.034234 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.034682 env[1226]: time="2025-08-13T00:54:38.034650938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-495zb,Uid:38ba63ac-0d67-41a3-8ffe-0fd803d8474e,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:38.079193 kubelet[1969]: E0813 00:54:38.079045 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.095939 kubelet[1969]: I0813 00:54:38.095709 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6n7xp" podStartSLOduration=6.781916349 podStartE2EDuration="22.095663413s" podCreationTimestamp="2025-08-13 00:54:16 +0000 UTC" firstStartedPulling="2025-08-13 00:54:18.253009287 +0000 UTC m=+7.440612595" lastFinishedPulling="2025-08-13 00:54:33.566756341 +0000 UTC m=+22.754359659" observedRunningTime="2025-08-13 00:54:38.094981196 +0000 UTC m=+27.282584514" watchObservedRunningTime="2025-08-13 00:54:38.095663413 +0000 UTC m=+27.283266731" Aug 13 00:54:39.080626 kubelet[1969]: E0813 00:54:39.080581 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.304043 systemd-networkd[1046]: cilium_host: Link UP Aug 13 00:54:39.304196 systemd-networkd[1046]: cilium_net: Link UP Aug 13 00:54:39.304199 systemd-networkd[1046]: cilium_net: Gained carrier Aug 13 00:54:39.304344 systemd-networkd[1046]: cilium_host: Gained carrier Aug 13 00:54:39.308587 systemd-networkd[1046]: cilium_host: Gained IPv6LL Aug 13 00:54:39.309126 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:54:39.388581 systemd-networkd[1046]: cilium_vxlan: Link UP Aug 13 00:54:39.388591 systemd-networkd[1046]: cilium_vxlan: Gained carrier Aug 13 00:54:39.611138 kernel: NET: Registered PF_ALG protocol family Aug 13 00:54:39.950337 systemd-networkd[1046]: cilium_net: Gained IPv6LL Aug 13 00:54:40.084949 kubelet[1969]: E0813 00:54:40.084921 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:40.181070 systemd-networkd[1046]: lxc_health: Link UP Aug 13 00:54:40.191311 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:54:40.194816 systemd-networkd[1046]: lxc_health: Gained carrier Aug 13 00:54:40.568746 systemd-networkd[1046]: lxc8ce4d7c793a7: Link UP Aug 13 00:54:40.579692 kernel: eth0: renamed from tmpd1abe Aug 13 00:54:40.585562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:54:40.585610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8ce4d7c793a7: link becomes ready Aug 13 00:54:40.585729 systemd-networkd[1046]: lxc8ce4d7c793a7: Gained carrier Aug 13 00:54:40.596893 systemd-networkd[1046]: lxc2228f7cf6c78: Link UP Aug 13 00:54:40.612134 kernel: eth0: renamed from tmpadff0 Aug 13 00:54:40.627943 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2228f7cf6c78: link becomes ready Aug 13 00:54:40.626085 systemd-networkd[1046]: lxc2228f7cf6c78: Gained carrier Aug 13 00:54:40.635000 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:51976.service.
Aug 13 00:54:40.688411 sshd[3164]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:40.689996 sshd[3164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:40.695129 systemd-logind[1213]: New session 6 of user core. Aug 13 00:54:40.695439 systemd[1]: Started session-6.scope. Aug 13 00:54:40.875288 sshd[3164]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:40.878411 systemd-logind[1213]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:54:40.878736 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:51976.service: Deactivated successfully. Aug 13 00:54:40.879365 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:54:40.880486 systemd-logind[1213]: Removed session 6. Aug 13 00:54:41.295299 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL Aug 13 00:54:41.311763 kubelet[1969]: E0813 00:54:41.311717 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:41.550253 systemd-networkd[1046]: lxc_health: Gained IPv6LL Aug 13 00:54:41.870402 systemd-networkd[1046]: lxc2228f7cf6c78: Gained IPv6LL Aug 13 00:54:42.086807 kubelet[1969]: E0813 00:54:42.086761 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:42.446310 systemd-networkd[1046]: lxc8ce4d7c793a7: Gained IPv6LL Aug 13 00:54:43.088600 kubelet[1969]: E0813 00:54:43.088541 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.036237 env[1226]: time="2025-08-13T00:54:44.036150063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:44.036237 env[1226]: time="2025-08-13T00:54:44.036189459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:44.036237 env[1226]: time="2025-08-13T00:54:44.036198856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:44.036677 env[1226]: time="2025-08-13T00:54:44.036347262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1abe885063ee3425b50762c96b25ac5219f8896e4667927e92d99fff6e8ae92 pid=3210 runtime=io.containerd.runc.v2 Aug 13 00:54:44.043251 env[1226]: time="2025-08-13T00:54:44.043188883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:44.043363 env[1226]: time="2025-08-13T00:54:44.043230693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:44.043363 env[1226]: time="2025-08-13T00:54:44.043240381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:44.043472 env[1226]: time="2025-08-13T00:54:44.043349701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523 pid=3228 runtime=io.containerd.runc.v2 Aug 13 00:54:44.051667 systemd[1]: Started cri-containerd-d1abe885063ee3425b50762c96b25ac5219f8896e4667927e92d99fff6e8ae92.scope. Aug 13 00:54:44.059970 systemd[1]: run-containerd-runc-k8s.io-adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523-runc.mYMAxr.mount: Deactivated successfully.
Aug 13 00:54:44.064983 systemd[1]: Started cri-containerd-adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523.scope. Aug 13 00:54:44.070646 systemd-resolved[1167]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:54:44.079394 systemd-resolved[1167]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:54:44.099559 env[1226]: time="2025-08-13T00:54:44.099491814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q796w,Uid:1af790d1-6932-46c3-a46b-364922a312e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1abe885063ee3425b50762c96b25ac5219f8896e4667927e92d99fff6e8ae92\"" Aug 13 00:54:44.100403 kubelet[1969]: E0813 00:54:44.100378 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.105969 env[1226]: time="2025-08-13T00:54:44.105902656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-495zb,Uid:38ba63ac-0d67-41a3-8ffe-0fd803d8474e,Namespace:kube-system,Attempt:0,} returns sandbox id \"adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523\"" Aug 13 00:54:44.106993 kubelet[1969]: E0813 00:54:44.106971 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.129232 env[1226]: time="2025-08-13T00:54:44.129177266Z" level=info msg="CreateContainer within sandbox \"d1abe885063ee3425b50762c96b25ac5219f8896e4667927e92d99fff6e8ae92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:54:44.133297 env[1226]: time="2025-08-13T00:54:44.133253808Z" level=info msg="CreateContainer within sandbox \"adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:54:44.150310 env[1226]: time="2025-08-13T00:54:44.150256473Z" level=info msg="CreateContainer within sandbox \"d1abe885063ee3425b50762c96b25ac5219f8896e4667927e92d99fff6e8ae92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a62455f8ced28ca8fa5cf2b1f689934e7550456cd989d49c82a7dea641ee959\"" Aug 13 00:54:44.150868 env[1226]: time="2025-08-13T00:54:44.150820447Z" level=info msg="StartContainer for \"5a62455f8ced28ca8fa5cf2b1f689934e7550456cd989d49c82a7dea641ee959\"" Aug 13 00:54:44.152434 env[1226]: time="2025-08-13T00:54:44.152386751Z" level=info msg="CreateContainer within sandbox \"adff0d6e848b152c5c74b929620697b934313cc9f5df7946f1c4b23405e15523\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4350864d9c042ec6c80de3cfb089d7f182f4843ed68df8810d34a54eaf57b79\"" Aug 13 00:54:44.153140 env[1226]: time="2025-08-13T00:54:44.153111654Z" level=info msg="StartContainer for \"f4350864d9c042ec6c80de3cfb089d7f182f4843ed68df8810d34a54eaf57b79\"" Aug 13 00:54:44.169136 systemd[1]: Started cri-containerd-5a62455f8ced28ca8fa5cf2b1f689934e7550456cd989d49c82a7dea641ee959.scope. Aug 13 00:54:44.173304 systemd[1]: Started cri-containerd-f4350864d9c042ec6c80de3cfb089d7f182f4843ed68df8810d34a54eaf57b79.scope.
Aug 13 00:54:44.207164 env[1226]: time="2025-08-13T00:54:44.207075417Z" level=info msg="StartContainer for \"f4350864d9c042ec6c80de3cfb089d7f182f4843ed68df8810d34a54eaf57b79\" returns successfully" Aug 13 00:54:44.209327 env[1226]: time="2025-08-13T00:54:44.209284757Z" level=info msg="StartContainer for \"5a62455f8ced28ca8fa5cf2b1f689934e7550456cd989d49c82a7dea641ee959\" returns successfully" Aug 13 00:54:45.093428 kubelet[1969]: E0813 00:54:45.093393 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.095820 kubelet[1969]: E0813 00:54:45.095794 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.339907 kubelet[1969]: I0813 00:54:45.339849 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q796w" podStartSLOduration=29.339829427 podStartE2EDuration="29.339829427s" podCreationTimestamp="2025-08-13 00:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:45.3395348 +0000 UTC m=+34.527138108" watchObservedRunningTime="2025-08-13 00:54:45.339829427 +0000 UTC m=+34.527432745" Aug 13 00:54:45.340536 kubelet[1969]: I0813 00:54:45.340467 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-495zb" podStartSLOduration=29.340459207 podStartE2EDuration="29.340459207s" podCreationTimestamp="2025-08-13 00:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:45.263188235 +0000 UTC m=+34.450791553" watchObservedRunningTime="2025-08-13 00:54:45.340459207 +0000 UTC m=+34.528062546" Aug 13 00:54:45.879739 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:51978.service. Aug 13 00:54:45.918196 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:45.919616 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:45.923628 systemd-logind[1213]: New session 7 of user core. Aug 13 00:54:45.924544 systemd[1]: Started session-7.scope. Aug 13 00:54:46.097377 kubelet[1969]: E0813 00:54:46.097327 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:46.097948 kubelet[1969]: E0813 00:54:46.097928 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:46.211706 sshd[3367]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:46.214297 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:51978.service: Deactivated successfully. Aug 13 00:54:46.215007 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:54:46.215460 systemd-logind[1213]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:54:46.216239 systemd-logind[1213]: Removed session 7. Aug 13 00:54:47.099186 kubelet[1969]: E0813 00:54:47.099136 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:47.099552 kubelet[1969]: E0813 00:54:47.099260 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:51.216011 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:57258.service.
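The `pod_startup_latency_tracker` entries encode simple arithmetic: `podStartSLOduration` (equal to `podStartE2EDuration` here, since the coredns pods needed no image pull) is `observedRunningTime` minus `podCreationTimestamp`. A small sketch reproducing the coredns-674b8bbfcf-q796w figure, with the seconds-within-minute-00:54 values hard-coded from the log:

```shell
# created at 00:54:16, observed running at 00:54:45.339829427 -> ~29.34 s
dur=$(awk 'BEGIN { printf "%.2f", 45.339829427 - 16.0 }')
echo "${dur}s"
```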
Aug 13 00:54:51.250285 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 57258 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:51.251342 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:51.255058 systemd-logind[1213]: New session 8 of user core. Aug 13 00:54:51.256130 systemd[1]: Started session-8.scope. Aug 13 00:54:51.364340 sshd[3384]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:51.366670 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:57258.service: Deactivated successfully. Aug 13 00:54:51.367469 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:54:51.368187 systemd-logind[1213]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:54:51.368796 systemd-logind[1213]: Removed session 8. Aug 13 00:54:56.369890 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:57260.service. Aug 13 00:54:56.404698 sshd[3398]: Accepted publickey for core from 10.0.0.1 port 57260 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:56.405891 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:56.409450 systemd-logind[1213]: New session 9 of user core. Aug 13 00:54:56.410440 systemd[1]: Started session-9.scope. Aug 13 00:54:56.525727 sshd[3398]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:56.528452 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:57260.service: Deactivated successfully. Aug 13 00:54:56.529194 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:54:56.529773 systemd-logind[1213]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:54:56.530563 systemd-logind[1213]: Removed session 9. Aug 13 00:55:01.531628 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:33074.service. 
Aug 13 00:55:01.567726 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 33074 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:01.569454 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:01.573680 systemd-logind[1213]: New session 10 of user core. Aug 13 00:55:01.574505 systemd[1]: Started session-10.scope. Aug 13 00:55:01.686662 sshd[3412]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:01.689523 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:33074.service: Deactivated successfully. Aug 13 00:55:01.690082 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:55:01.691765 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:33082.service. Aug 13 00:55:01.692423 systemd-logind[1213]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:55:01.693459 systemd-logind[1213]: Removed session 10. Aug 13 00:55:01.728607 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 33082 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:01.730388 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:01.735313 systemd-logind[1213]: New session 11 of user core. Aug 13 00:55:01.736256 systemd[1]: Started session-11.scope. Aug 13 00:55:01.901269 sshd[3427]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:01.905569 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:33084.service. Aug 13 00:55:01.912337 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:33082.service: Deactivated successfully. Aug 13 00:55:01.913364 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:55:01.915298 systemd-logind[1213]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:55:01.916385 systemd-logind[1213]: Removed session 11. 
Aug 13 00:55:01.946435 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:01.947882 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:01.951912 systemd-logind[1213]: New session 12 of user core. Aug 13 00:55:01.952691 systemd[1]: Started session-12.scope. Aug 13 00:55:02.069569 sshd[3437]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:02.072016 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:33084.service: Deactivated successfully. Aug 13 00:55:02.073021 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:55:02.073735 systemd-logind[1213]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:55:02.074639 systemd-logind[1213]: Removed session 12. Aug 13 00:55:07.074993 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:33098.service. Aug 13 00:55:07.108281 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 33098 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:07.109298 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:07.113043 systemd-logind[1213]: New session 13 of user core. Aug 13 00:55:07.113821 systemd[1]: Started session-13.scope. Aug 13 00:55:07.246686 sshd[3451]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:07.248820 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:33098.service: Deactivated successfully. Aug 13 00:55:07.249598 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:55:07.250514 systemd-logind[1213]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:55:07.251221 systemd-logind[1213]: Removed session 13. Aug 13 00:55:12.251008 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:35488.service. 
Aug 13 00:55:12.284379 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 35488 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:12.285634 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:12.288993 systemd-logind[1213]: New session 14 of user core. Aug 13 00:55:12.289815 systemd[1]: Started session-14.scope. Aug 13 00:55:12.397007 sshd[3466]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:12.399410 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:35488.service: Deactivated successfully. Aug 13 00:55:12.400246 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:55:12.400978 systemd-logind[1213]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:55:12.401727 systemd-logind[1213]: Removed session 14. Aug 13 00:55:17.401558 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:35502.service. Aug 13 00:55:17.439307 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:17.440515 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:17.444164 systemd-logind[1213]: New session 15 of user core. Aug 13 00:55:17.444946 systemd[1]: Started session-15.scope. Aug 13 00:55:17.559289 sshd[3480]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:17.562462 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:35502.service: Deactivated successfully. Aug 13 00:55:17.563089 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:55:17.564915 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:35506.service. Aug 13 00:55:17.565858 systemd-logind[1213]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:55:17.566764 systemd-logind[1213]: Removed session 15. 
Aug 13 00:55:17.603154 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 35506 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:17.604501 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:17.608772 systemd-logind[1213]: New session 16 of user core. Aug 13 00:55:17.609737 systemd[1]: Started session-16.scope. Aug 13 00:55:18.340407 sshd[3493]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:18.345420 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:35074.service. Aug 13 00:55:18.346221 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:35506.service: Deactivated successfully. Aug 13 00:55:18.346946 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:55:18.347958 systemd-logind[1213]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:55:18.348941 systemd-logind[1213]: Removed session 16. Aug 13 00:55:18.380752 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:18.382244 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:18.386510 systemd-logind[1213]: New session 17 of user core. Aug 13 00:55:18.387457 systemd[1]: Started session-17.scope. Aug 13 00:55:19.187524 sshd[3504]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:19.191088 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:35082.service. Aug 13 00:55:19.191878 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:35074.service: Deactivated successfully. Aug 13 00:55:19.193304 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:55:19.194128 systemd-logind[1213]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:55:19.195384 systemd-logind[1213]: Removed session 17. 
Aug 13 00:55:19.229503 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:19.230905 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:19.234653 systemd-logind[1213]: New session 18 of user core. Aug 13 00:55:19.235492 systemd[1]: Started session-18.scope. Aug 13 00:55:19.507167 sshd[3543]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:19.511939 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:35092.service. Aug 13 00:55:19.517070 systemd-logind[1213]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:55:19.519070 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:35082.service: Deactivated successfully. Aug 13 00:55:19.520056 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:55:19.521793 systemd-logind[1213]: Removed session 18. Aug 13 00:55:19.561602 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:19.563467 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:19.568400 systemd[1]: Started session-19.scope. Aug 13 00:55:19.568459 systemd-logind[1213]: New session 19 of user core. Aug 13 00:55:19.689238 sshd[3555]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:19.692347 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:35092.service: Deactivated successfully. Aug 13 00:55:19.693399 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:55:19.694084 systemd-logind[1213]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:55:19.695012 systemd-logind[1213]: Removed session 19. 
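Each SSH connection above follows the same pam/systemd-logind lifecycle: `Accepted publickey` → `session opened` → `Started session-N.scope` → `session closed` → scope deactivated → `Removed session N`. When auditing a log like this one, pairing the pam_unix open/close events is a quick check that no session leaked; a minimal sketch over a two-line excerpt trimmed from the entries above:

```shell
log='sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
sshd[3543]: pam_unix(sshd:session): session closed for user core'
opened=$(printf '%s\n' "$log" | grep -c 'session opened')
closed=$(printf '%s\n' "$log" | grep -c 'session closed')
echo "opened=$opened closed=$closed"
```

On a live system, `loginctl list-sessions` gives the same picture from logind's side.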
Aug 13 00:55:24.007754 kubelet[1969]: E0813 00:55:24.007675 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:24.694502 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:35096.service. Aug 13 00:55:24.728479 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 35096 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:24.730197 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:24.733844 systemd-logind[1213]: New session 20 of user core. Aug 13 00:55:24.734649 systemd[1]: Started session-20.scope. Aug 13 00:55:24.870004 sshd[3569]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:24.872898 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:35096.service: Deactivated successfully. Aug 13 00:55:24.873764 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:55:24.874486 systemd-logind[1213]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:55:24.875355 systemd-logind[1213]: Removed session 20. Aug 13 00:55:29.874413 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:37902.service. Aug 13 00:55:29.909009 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 37902 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:29.910189 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:29.913910 systemd-logind[1213]: New session 21 of user core. Aug 13 00:55:29.914991 systemd[1]: Started session-21.scope. Aug 13 00:55:30.023164 sshd[3584]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:30.025323 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:37902.service: Deactivated successfully. Aug 13 00:55:30.026004 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:55:30.026552 systemd-logind[1213]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:55:30.027214 systemd-logind[1213]: Removed session 21. Aug 13 00:55:34.007927 kubelet[1969]: E0813 00:55:34.007839 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:35.028444 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:37912.service. Aug 13 00:55:35.062350 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 37912 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:35.063593 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:35.067019 systemd-logind[1213]: New session 22 of user core. Aug 13 00:55:35.067883 systemd[1]: Started session-22.scope. Aug 13 00:55:35.176084 sshd[3598]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:35.180109 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:37912.service: Deactivated successfully. Aug 13 00:55:35.180844 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:55:35.181629 systemd-logind[1213]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:55:35.183430 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:37916.service. Aug 13 00:55:35.184513 systemd-logind[1213]: Removed session 22. Aug 13 00:55:35.219439 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 37916 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:35.220787 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:35.224153 systemd-logind[1213]: New session 23 of user core. Aug 13 00:55:35.224948 systemd[1]: Started session-23.scope.
Aug 13 00:55:36.828357 env[1226]: time="2025-08-13T00:55:36.828294410Z" level=info msg="StopContainer for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" with timeout 30 (s)"
Aug 13 00:55:36.829755 env[1226]: time="2025-08-13T00:55:36.829547245Z" level=info msg="Stop container \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" with signal terminated"
Aug 13 00:55:36.839960 systemd[1]: cri-containerd-c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a.scope: Deactivated successfully.
Aug 13 00:55:36.852589 env[1226]: time="2025-08-13T00:55:36.852513075Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:55:36.857232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a-rootfs.mount: Deactivated successfully.
Aug 13 00:55:36.861539 env[1226]: time="2025-08-13T00:55:36.861497617Z" level=info msg="StopContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" with timeout 2 (s)"
Aug 13 00:55:36.861848 env[1226]: time="2025-08-13T00:55:36.861822436Z" level=info msg="Stop container \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" with signal terminated"
Aug 13 00:55:36.862272 env[1226]: time="2025-08-13T00:55:36.862233188Z" level=info msg="shim disconnected" id=c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a
Aug 13 00:55:36.862326 env[1226]: time="2025-08-13T00:55:36.862277472Z" level=warning msg="cleaning up after shim disconnected" id=c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a namespace=k8s.io
Aug 13 00:55:36.862326 env[1226]: time="2025-08-13T00:55:36.862289766Z" level=info msg="cleaning up dead shim"
Aug 13 00:55:36.868951 systemd-networkd[1046]: lxc_health: Link DOWN
Aug 13 00:55:36.868958 systemd-networkd[1046]: lxc_health: Lost carrier
Aug 13 00:55:36.871731 env[1226]: time="2025-08-13T00:55:36.871674199Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3659 runtime=io.containerd.runc.v2\n"
Aug 13 00:55:36.875313 env[1226]: time="2025-08-13T00:55:36.875277732Z" level=info msg="StopContainer for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" returns successfully"
Aug 13 00:55:36.875960 env[1226]: time="2025-08-13T00:55:36.875928912Z" level=info msg="StopPodSandbox for \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\""
Aug 13 00:55:36.876026 env[1226]: time="2025-08-13T00:55:36.876000819Z" level=info msg="Container to stop \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.878514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f-shm.mount: Deactivated successfully.
Aug 13 00:55:36.882058 systemd[1]: cri-containerd-2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f.scope: Deactivated successfully.
Aug 13 00:55:36.901482 systemd[1]: cri-containerd-9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400.scope: Deactivated successfully.
Aug 13 00:55:36.901806 systemd[1]: cri-containerd-9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400.scope: Consumed 6.312s CPU time.
Aug 13 00:55:36.906793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f-rootfs.mount: Deactivated successfully.
Aug 13 00:55:36.915408 env[1226]: time="2025-08-13T00:55:36.915194276Z" level=info msg="shim disconnected" id=2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f
Aug 13 00:55:36.915408 env[1226]: time="2025-08-13T00:55:36.915242798Z" level=warning msg="cleaning up after shim disconnected" id=2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f namespace=k8s.io
Aug 13 00:55:36.915408 env[1226]: time="2025-08-13T00:55:36.915253668Z" level=info msg="cleaning up dead shim"
Aug 13 00:55:36.920745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400-rootfs.mount: Deactivated successfully.
Aug 13 00:55:36.924899 env[1226]: time="2025-08-13T00:55:36.924839506Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n"
Aug 13 00:55:36.925265 env[1226]: time="2025-08-13T00:55:36.925229347Z" level=info msg="TearDown network for sandbox \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\" successfully"
Aug 13 00:55:36.925329 env[1226]: time="2025-08-13T00:55:36.925263062Z" level=info msg="StopPodSandbox for \"2e3246a8ba0a3073232709c00babd223e16822fb580839395e56dfc008a2380f\" returns successfully"
Aug 13 00:55:36.925877 env[1226]: time="2025-08-13T00:55:36.925833258Z" level=info msg="shim disconnected" id=9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400
Aug 13 00:55:36.925877 env[1226]: time="2025-08-13T00:55:36.925872924Z" level=warning msg="cleaning up after shim disconnected" id=9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400 namespace=k8s.io
Aug 13 00:55:36.925877 env[1226]: time="2025-08-13T00:55:36.925884626Z" level=info msg="cleaning up dead shim"
Aug 13 00:55:36.933489 env[1226]: time="2025-08-13T00:55:36.933418575Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3721 runtime=io.containerd.runc.v2\n"
Aug 13 00:55:36.936904 env[1226]: time="2025-08-13T00:55:36.936835282Z" level=info msg="StopContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" returns successfully"
Aug 13 00:55:36.937585 env[1226]: time="2025-08-13T00:55:36.937556065Z" level=info msg="StopPodSandbox for \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\""
Aug 13 00:55:36.937705 env[1226]: time="2025-08-13T00:55:36.937660925Z" level=info msg="Container to stop \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.937705 env[1226]: time="2025-08-13T00:55:36.937687916Z" level=info msg="Container to stop \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.937782 env[1226]: time="2025-08-13T00:55:36.937711891Z" level=info msg="Container to stop \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.937782 env[1226]: time="2025-08-13T00:55:36.937723193Z" level=info msg="Container to stop \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.937782 env[1226]: time="2025-08-13T00:55:36.937733903Z" level=info msg="Container to stop \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:55:36.944060 systemd[1]: cri-containerd-30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52.scope: Deactivated successfully.
Aug 13 00:55:36.971657 env[1226]: time="2025-08-13T00:55:36.971599041Z" level=info msg="shim disconnected" id=30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52
Aug 13 00:55:36.972624 env[1226]: time="2025-08-13T00:55:36.972587695Z" level=warning msg="cleaning up after shim disconnected" id=30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52 namespace=k8s.io
Aug 13 00:55:36.972624 env[1226]: time="2025-08-13T00:55:36.972611300Z" level=info msg="cleaning up dead shim"
Aug 13 00:55:36.988679 env[1226]: time="2025-08-13T00:55:36.988617264Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3752 runtime=io.containerd.runc.v2\n"
Aug 13 00:55:36.989138 env[1226]: time="2025-08-13T00:55:36.989084914Z" level=info msg="TearDown network for sandbox \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" successfully"
Aug 13 00:55:36.989138 env[1226]: time="2025-08-13T00:55:36.989130412Z" level=info msg="StopPodSandbox for \"30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52\" returns successfully"
Aug 13 00:55:37.069352 kubelet[1969]: I0813 00:55:37.069272 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-cgroup\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069352 kubelet[1969]: I0813 00:55:37.069334 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-etc-cni-netd\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069352 kubelet[1969]: I0813 00:55:37.069355 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-bpf-maps\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069379 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b35d23d-6b85-4838-972f-ee61b825d323-cilium-config-path\") pod \"7b35d23d-6b85-4838-972f-ee61b825d323\" (UID: \"7b35d23d-6b85-4838-972f-ee61b825d323\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069401 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-hostproc\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069416 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86n5\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069430 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-run\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069444 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-net\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.069875 kubelet[1969]: I0813 00:55:37.069457 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-config-path\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069469 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-kernel\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069486 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-hubble-tls\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069498 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cni-path\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069514 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-xtables-lock\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069485 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070035 kubelet[1969]: I0813 00:55:37.069488 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069623 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069605 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069528 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cdb1c13-6e2f-4139-8655-d48371cf2856-clustermesh-secrets\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069741 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxf2s\" (UniqueName: \"kubernetes.io/projected/7b35d23d-6b85-4838-972f-ee61b825d323-kube-api-access-dxf2s\") pod \"7b35d23d-6b85-4838-972f-ee61b825d323\" (UID: \"7b35d23d-6b85-4838-972f-ee61b825d323\") "
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069798 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-lib-modules\") pod \"5cdb1c13-6e2f-4139-8655-d48371cf2856\" (UID: \"5cdb1c13-6e2f-4139-8655-d48371cf2856\") "
Aug 13 00:55:37.070229 kubelet[1969]: I0813 00:55:37.069900 1969 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.070405 kubelet[1969]: I0813 00:55:37.069917 1969 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.070405 kubelet[1969]: I0813 00:55:37.069964 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.070405 kubelet[1969]: I0813 00:55:37.069975 1969 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.070405 kubelet[1969]: I0813 00:55:37.070000 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070783 kubelet[1969]: I0813 00:55:37.070756 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.070938 kubelet[1969]: I0813 00:55:37.070907 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.072388 kubelet[1969]: I0813 00:55:37.072359 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b35d23d-6b85-4838-972f-ee61b825d323-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b35d23d-6b85-4838-972f-ee61b825d323" (UID: "7b35d23d-6b85-4838-972f-ee61b825d323"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:55:37.072470 kubelet[1969]: I0813 00:55:37.072409 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.072470 kubelet[1969]: I0813 00:55:37.072426 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.072470 kubelet[1969]: I0813 00:55:37.072440 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:55:37.073762 kubelet[1969]: I0813 00:55:37.073737 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b35d23d-6b85-4838-972f-ee61b825d323-kube-api-access-dxf2s" (OuterVolumeSpecName: "kube-api-access-dxf2s") pod "7b35d23d-6b85-4838-972f-ee61b825d323" (UID: "7b35d23d-6b85-4838-972f-ee61b825d323"). InnerVolumeSpecName "kube-api-access-dxf2s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:55:37.073928 kubelet[1969]: I0813 00:55:37.073903 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:55:37.074303 kubelet[1969]: I0813 00:55:37.074253 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cdb1c13-6e2f-4139-8655-d48371cf2856-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:55:37.074674 kubelet[1969]: I0813 00:55:37.074630 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:55:37.075872 kubelet[1969]: I0813 00:55:37.075836 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5" (OuterVolumeSpecName: "kube-api-access-j86n5") pod "5cdb1c13-6e2f-4139-8655-d48371cf2856" (UID: "5cdb1c13-6e2f-4139-8655-d48371cf2856"). InnerVolumeSpecName "kube-api-access-j86n5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170515 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170558 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdb1c13-6e2f-4139-8655-d48371cf2856-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170570 1969 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170581 1969 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170590 1969 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170597 1969 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170605 1969 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cdb1c13-6e2f-4139-8655-d48371cf2856-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.170679 kubelet[1969]: I0813 00:55:37.170614 1969 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxf2s\" (UniqueName: \"kubernetes.io/projected/7b35d23d-6b85-4838-972f-ee61b825d323-kube-api-access-dxf2s\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.171051 kubelet[1969]: I0813 00:55:37.170623 1969 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.171051 kubelet[1969]: I0813 00:55:37.170630 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b35d23d-6b85-4838-972f-ee61b825d323-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.171051 kubelet[1969]: I0813 00:55:37.170637 1969 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cdb1c13-6e2f-4139-8655-d48371cf2856-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.171051 kubelet[1969]: I0813 00:55:37.170645 1969 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j86n5\" (UniqueName: \"kubernetes.io/projected/5cdb1c13-6e2f-4139-8655-d48371cf2856-kube-api-access-j86n5\") on node \"localhost\" DevicePath \"\""
Aug 13 00:55:37.209247 kubelet[1969]: I0813 00:55:37.209194 1969 scope.go:117] "RemoveContainer" containerID="9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400"
Aug 13 00:55:37.211163 env[1226]: time="2025-08-13T00:55:37.211075428Z" level=info msg="RemoveContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\""
Aug 13 00:55:37.213494 systemd[1]: Removed slice kubepods-burstable-pod5cdb1c13_6e2f_4139_8655_d48371cf2856.slice.
Aug 13 00:55:37.213711 systemd[1]: kubepods-burstable-pod5cdb1c13_6e2f_4139_8655_d48371cf2856.slice: Consumed 6.411s CPU time.
Aug 13 00:55:37.215239 env[1226]: time="2025-08-13T00:55:37.215199533Z" level=info msg="RemoveContainer for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" returns successfully"
Aug 13 00:55:37.215645 kubelet[1969]: I0813 00:55:37.215625 1969 scope.go:117] "RemoveContainer" containerID="87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8"
Aug 13 00:55:37.216583 systemd[1]: Removed slice kubepods-besteffort-pod7b35d23d_6b85_4838_972f_ee61b825d323.slice.
Aug 13 00:55:37.217885 env[1226]: time="2025-08-13T00:55:37.217843749Z" level=info msg="RemoveContainer for \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\""
Aug 13 00:55:37.221066 env[1226]: time="2025-08-13T00:55:37.221032633Z" level=info msg="RemoveContainer for \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\" returns successfully"
Aug 13 00:55:37.221303 kubelet[1969]: I0813 00:55:37.221272 1969 scope.go:117] "RemoveContainer" containerID="d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6"
Aug 13 00:55:37.222848 env[1226]: time="2025-08-13T00:55:37.222794679Z" level=info msg="RemoveContainer for \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\""
Aug 13 00:55:37.226531 env[1226]: time="2025-08-13T00:55:37.226480610Z" level=info msg="RemoveContainer for \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\" returns successfully"
Aug 13 00:55:37.226841 kubelet[1969]: I0813 00:55:37.226804 1969 scope.go:117] "RemoveContainer" containerID="cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd"
Aug 13 00:55:37.228231 env[1226]: time="2025-08-13T00:55:37.228195437Z" level=info msg="RemoveContainer for \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\""
Aug 13 00:55:37.231935 env[1226]: time="2025-08-13T00:55:37.231888430Z" level=info msg="RemoveContainer for \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\" returns successfully"
Aug 13 00:55:37.232272 kubelet[1969]: I0813 00:55:37.232235 1969 scope.go:117] "RemoveContainer" containerID="ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003"
Aug 13 00:55:37.233734 env[1226]: time="2025-08-13T00:55:37.233619217Z" level=info msg="RemoveContainer for \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\""
Aug 13 00:55:37.238359 env[1226]: time="2025-08-13T00:55:37.237380401Z" level=info msg="RemoveContainer for \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\" returns successfully"
Aug 13 00:55:37.238633 kubelet[1969]: I0813 00:55:37.238597 1969 scope.go:117] "RemoveContainer" containerID="9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400"
Aug 13 00:55:37.238983 env[1226]: time="2025-08-13T00:55:37.238874276Z" level=error msg="ContainerStatus for \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\": not found"
Aug 13 00:55:37.239248 kubelet[1969]: E0813 00:55:37.239207 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\": not found" containerID="9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400"
Aug 13 00:55:37.239310 kubelet[1969]: I0813 00:55:37.239249 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400"} err="failed to get container status \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d58d61ce49feb7b31924766110c74cffbfcb779bd3f5923f489725079ef6400\": not found"
Aug 13 00:55:37.239310 kubelet[1969]: I0813 00:55:37.239291 1969 scope.go:117] "RemoveContainer" containerID="87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8"
Aug 13 00:55:37.239662 env[1226]: time="2025-08-13T00:55:37.239600570Z" level=error msg="ContainerStatus for \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\": not found"
Aug 13 00:55:37.239882 kubelet[1969]: E0813 00:55:37.239856 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\": not found" containerID="87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8"
Aug 13 00:55:37.240002 kubelet[1969]: I0813 00:55:37.239887 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8"} err="failed to get container status \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\": rpc error: code = NotFound desc = an error occurred when try to find container \"87e7a53bfa8d11998990e06a3d1c08e44ebce678871160a4f8a80825f5e45be8\": not found"
Aug 13 00:55:37.240002 kubelet[1969]: I0813 00:55:37.239913 1969 scope.go:117] "RemoveContainer" containerID="d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6"
Aug 13 00:55:37.240620 env[1226]: time="2025-08-13T00:55:37.240510563Z" level=error msg="ContainerStatus for \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\": not found"
Aug 13 00:55:37.240832 kubelet[1969]: E0813 00:55:37.240811 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\": not found" containerID="d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6"
Aug 13 00:55:37.240914 kubelet[1969]: I0813 00:55:37.240885 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6"} err="failed to get container status \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4cb11e681ca21d10636ad1ac45bdeedf5bea060f3aaac2c1a2ab999e8abf2e6\": not found"
Aug 13 00:55:37.240914 kubelet[1969]: I0813 00:55:37.240904 1969 scope.go:117] "RemoveContainer" containerID="cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd"
Aug 13 00:55:37.241203 env[1226]: time="2025-08-13T00:55:37.241133689Z" level=error msg="ContainerStatus for \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\": not found"
Aug 13 00:55:37.241285 kubelet[1969]: E0813 00:55:37.241257 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\": not found" containerID="cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd"
Aug 13 00:55:37.241285 kubelet[1969]: I0813 00:55:37.241276 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd"} err="failed to get container status \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbf4c402beb939d40fe9ffb052a9eadef604df30078b029d3ea6cb07b87ca1fd\": not found"
Aug 13 00:55:37.241435 kubelet[1969]: I0813 00:55:37.241287 1969 scope.go:117] "RemoveContainer" containerID="ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003"
Aug 13 00:55:37.241482 env[1226]: time="2025-08-13T00:55:37.241438050Z" level=error msg="ContainerStatus for \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\": not found"
Aug 13 00:55:37.241568 kubelet[1969]: E0813 00:55:37.241550 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\": not found" containerID="ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003"
Aug 13 00:55:37.241628 kubelet[1969]: I0813 00:55:37.241579 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003"} err="failed to get container status \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee7a32e691e26867b2c6183ed92a8446d09ac58e8d5d0f6355dc761ab7d71003\": not found"
Aug 13 00:55:37.241628 kubelet[1969]: I0813 00:55:37.241594 1969 scope.go:117] "RemoveContainer" containerID="c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a"
Aug 13 00:55:37.243124 env[1226]: time="2025-08-13T00:55:37.243046744Z" level=info msg="RemoveContainer for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\""
Aug 13 00:55:37.246323 env[1226]: time="2025-08-13T00:55:37.246283468Z" level=info msg="RemoveContainer for
\"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" returns successfully" Aug 13 00:55:37.246531 kubelet[1969]: I0813 00:55:37.246501 1969 scope.go:117] "RemoveContainer" containerID="c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a" Aug 13 00:55:37.246751 env[1226]: time="2025-08-13T00:55:37.246704200Z" level=error msg="ContainerStatus for \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\": not found" Aug 13 00:55:37.246882 kubelet[1969]: E0813 00:55:37.246852 1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\": not found" containerID="c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a" Aug 13 00:55:37.246951 kubelet[1969]: I0813 00:55:37.246885 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a"} err="failed to get container status \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4cdc8320bfa76eaf754eeb35152ba150f385acd4ba3da383e24e104ed02889a\": not found" Aug 13 00:55:37.833897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52-rootfs.mount: Deactivated successfully. Aug 13 00:55:37.834006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30e79f034101e1c2e514a40647896e6f5c606feb5e658a3d19b0a4eaf94edb52-shm.mount: Deactivated successfully. 
Aug 13 00:55:37.834123 systemd[1]: var-lib-kubelet-pods-5cdb1c13\x2d6e2f\x2d4139\x2d8655\x2dd48371cf2856-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj86n5.mount: Deactivated successfully. Aug 13 00:55:37.834228 systemd[1]: var-lib-kubelet-pods-7b35d23d\x2d6b85\x2d4838\x2d972f\x2dee61b825d323-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxf2s.mount: Deactivated successfully. Aug 13 00:55:37.834299 systemd[1]: var-lib-kubelet-pods-5cdb1c13\x2d6e2f\x2d4139\x2d8655\x2dd48371cf2856-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:55:37.834371 systemd[1]: var-lib-kubelet-pods-5cdb1c13\x2d6e2f\x2d4139\x2d8655\x2dd48371cf2856-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:55:38.815976 sshd[3612]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:38.822158 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:45652.service. Aug 13 00:55:38.822930 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:37916.service: Deactivated successfully. Aug 13 00:55:38.823876 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:55:38.825636 systemd-logind[1213]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:55:38.827063 systemd-logind[1213]: Removed session 23. Aug 13 00:55:38.866447 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:38.868421 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:38.879134 systemd-logind[1213]: New session 24 of user core. Aug 13 00:55:38.879913 systemd[1]: Started session-24.scope. 
Aug 13 00:55:39.010763 kubelet[1969]: I0813 00:55:39.010521 1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdb1c13-6e2f-4139-8655-d48371cf2856" path="/var/lib/kubelet/pods/5cdb1c13-6e2f-4139-8655-d48371cf2856/volumes" Aug 13 00:55:39.014974 kubelet[1969]: I0813 00:55:39.014629 1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b35d23d-6b85-4838-972f-ee61b825d323" path="/var/lib/kubelet/pods/7b35d23d-6b85-4838-972f-ee61b825d323/volumes" Aug 13 00:55:39.649323 sshd[3771]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:39.651175 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:45658.service. Aug 13 00:55:39.656499 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:45652.service: Deactivated successfully. Aug 13 00:55:39.657645 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:55:39.658977 systemd-logind[1213]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:55:39.660895 systemd-logind[1213]: Removed session 24. Aug 13 00:55:39.697393 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 45658 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:39.698819 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:39.707493 systemd-logind[1213]: New session 25 of user core. Aug 13 00:55:39.709351 systemd[1]: Started session-25.scope. Aug 13 00:55:39.737501 systemd[1]: Created slice kubepods-burstable-podc263889a_feb3_454a_922b_aecf010a0cc9.slice. 
Aug 13 00:55:39.789046 kubelet[1969]: I0813 00:55:39.788984 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-xtables-lock\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789046 kubelet[1969]: I0813 00:55:39.789035 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-ipsec-secrets\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789046 kubelet[1969]: I0813 00:55:39.789050 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-hubble-tls\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789046 kubelet[1969]: I0813 00:55:39.789064 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-bpf-maps\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789046 kubelet[1969]: I0813 00:55:39.789078 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-run\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789104 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-hostproc\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789121 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-etc-cni-netd\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789135 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-lib-modules\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789152 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-config-path\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789166 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxbn\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-kube-api-access-pfxbn\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789466 kubelet[1969]: I0813 00:55:39.789179 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-cgroup\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789715 kubelet[1969]: I0813 00:55:39.789192 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cni-path\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789715 kubelet[1969]: I0813 00:55:39.789207 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-clustermesh-secrets\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789715 kubelet[1969]: I0813 00:55:39.789220 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-net\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.789715 kubelet[1969]: I0813 00:55:39.789235 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-kernel\") pod \"cilium-qnqrj\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " pod="kube-system/cilium-qnqrj" Aug 13 00:55:39.855133 sshd[3783]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:39.860817 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:45664.service. Aug 13 00:55:39.861606 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:45658.service: Deactivated successfully. 
Aug 13 00:55:39.862343 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:55:39.863317 systemd-logind[1213]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:55:39.864316 systemd-logind[1213]: Removed session 25. Aug 13 00:55:39.877849 kubelet[1969]: E0813 00:55:39.877785 1969 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-pfxbn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qnqrj" podUID="c263889a-feb3-454a-922b-aecf010a0cc9" Aug 13 00:55:39.911226 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:39.910116 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:39.928846 systemd[1]: Started session-26.scope. Aug 13 00:55:39.929234 systemd-logind[1213]: New session 26 of user core. 
Aug 13 00:55:40.293217 kubelet[1969]: I0813 00:55:40.293168 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-cgroup\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293217 kubelet[1969]: I0813 00:55:40.293212 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-net\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293233 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-hostproc\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293246 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cni-path\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293259 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-xtables-lock\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293274 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-etc-cni-netd\") pod 
\"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293287 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-run\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293690 kubelet[1969]: I0813 00:55:40.293308 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfxbn\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-kube-api-access-pfxbn\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293920 kubelet[1969]: I0813 00:55:40.293324 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-clustermesh-secrets\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293920 kubelet[1969]: I0813 00:55:40.293338 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-hubble-tls\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293920 kubelet[1969]: I0813 00:55:40.293325 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.293920 kubelet[1969]: I0813 00:55:40.293358 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-kernel\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.293920 kubelet[1969]: I0813 00:55:40.293361 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-hostproc" (OuterVolumeSpecName: "hostproc") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294120 kubelet[1969]: I0813 00:55:40.293373 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-ipsec-secrets\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.294120 kubelet[1969]: I0813 00:55:40.293338 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cni-path" (OuterVolumeSpecName: "cni-path") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294120 kubelet[1969]: I0813 00:55:40.293388 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-config-path\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.294120 kubelet[1969]: I0813 00:55:40.293403 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-bpf-maps\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.294120 kubelet[1969]: I0813 00:55:40.293419 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294374 kubelet[1969]: I0813 00:55:40.293424 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294374 kubelet[1969]: I0813 00:55:40.293433 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294374 kubelet[1969]: I0813 00:55:40.293447 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294374 kubelet[1969]: I0813 00:55:40.293490 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294374 kubelet[1969]: I0813 00:55:40.293282 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.294730 kubelet[1969]: I0813 00:55:40.294667 1969 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-lib-modules\") pod \"c263889a-feb3-454a-922b-aecf010a0cc9\" (UID: \"c263889a-feb3-454a-922b-aecf010a0cc9\") " Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294759 1969 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294773 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294782 1969 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294791 1969 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294798 1969 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294806 1969 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 
00:55:40.294990 kubelet[1969]: I0813 00:55:40.294813 1969 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.294990 kubelet[1969]: I0813 00:55:40.294821 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.295287 kubelet[1969]: I0813 00:55:40.294833 1969 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.295287 kubelet[1969]: I0813 00:55:40.294855 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:40.295805 kubelet[1969]: I0813 00:55:40.295773 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:55:40.297345 systemd[1]: var-lib-kubelet-pods-c263889a\x2dfeb3\x2d454a\x2d922b\x2daecf010a0cc9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:55:40.297837 kubelet[1969]: I0813 00:55:40.297796 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:55:40.299557 systemd[1]: var-lib-kubelet-pods-c263889a\x2dfeb3\x2d454a\x2d922b\x2daecf010a0cc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfxbn.mount: Deactivated successfully. Aug 13 00:55:40.299634 systemd[1]: var-lib-kubelet-pods-c263889a\x2dfeb3\x2d454a\x2d922b\x2daecf010a0cc9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:55:40.299959 kubelet[1969]: I0813 00:55:40.299896 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-kube-api-access-pfxbn" (OuterVolumeSpecName: "kube-api-access-pfxbn") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "kube-api-access-pfxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:55:40.300167 kubelet[1969]: I0813 00:55:40.300075 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:55:40.300414 kubelet[1969]: I0813 00:55:40.300362 1969 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c263889a-feb3-454a-922b-aecf010a0cc9" (UID: "c263889a-feb3-454a-922b-aecf010a0cc9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395594 1969 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c263889a-feb3-454a-922b-aecf010a0cc9-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395635 1969 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfxbn\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-kube-api-access-pfxbn\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395645 1969 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395653 1969 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c263889a-feb3-454a-922b-aecf010a0cc9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395661 1969 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.395661 kubelet[1969]: I0813 00:55:40.395668 1969 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c263889a-feb3-454a-922b-aecf010a0cc9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:40.895833 systemd[1]: var-lib-kubelet-pods-c263889a\x2dfeb3\x2d454a\x2d922b\x2daecf010a0cc9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:55:41.013289 systemd[1]: Removed slice kubepods-burstable-podc263889a_feb3_454a_922b_aecf010a0cc9.slice. Aug 13 00:55:41.058634 kubelet[1969]: E0813 00:55:41.058574 1969 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:55:41.265606 systemd[1]: Created slice kubepods-burstable-podc5b368b5_e5b8_4fab_93bc_e15b16040434.slice. Aug 13 00:55:41.300726 kubelet[1969]: I0813 00:55:41.300650 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b368b5-e5b8-4fab-93bc-e15b16040434-cilium-config-path\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301289 kubelet[1969]: I0813 00:55:41.301262 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5b368b5-e5b8-4fab-93bc-e15b16040434-cilium-ipsec-secrets\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301478 kubelet[1969]: I0813 00:55:41.301435 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b368b5-e5b8-4fab-93bc-e15b16040434-hubble-tls\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301478 
kubelet[1969]: I0813 00:55:41.301469 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2rw\" (UniqueName: \"kubernetes.io/projected/c5b368b5-e5b8-4fab-93bc-e15b16040434-kube-api-access-5t2rw\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301478 kubelet[1969]: I0813 00:55:41.301493 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-host-proc-sys-kernel\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301513 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-cilium-run\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301533 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-cilium-cgroup\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301553 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-hostproc\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301572 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-host-proc-sys-net\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301667 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b368b5-e5b8-4fab-93bc-e15b16040434-clustermesh-secrets\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301757 kubelet[1969]: I0813 00:55:41.301755 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-bpf-maps\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301902 kubelet[1969]: I0813 00:55:41.301785 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-cni-path\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301902 kubelet[1969]: I0813 00:55:41.301805 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-etc-cni-netd\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301902 kubelet[1969]: I0813 00:55:41.301825 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-lib-modules\") pod 
\"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.301902 kubelet[1969]: I0813 00:55:41.301840 1969 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b368b5-e5b8-4fab-93bc-e15b16040434-xtables-lock\") pod \"cilium-lqnpl\" (UID: \"c5b368b5-e5b8-4fab-93bc-e15b16040434\") " pod="kube-system/cilium-lqnpl" Aug 13 00:55:41.568456 kubelet[1969]: E0813 00:55:41.568399 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:41.569129 env[1226]: time="2025-08-13T00:55:41.569039495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqnpl,Uid:c5b368b5-e5b8-4fab-93bc-e15b16040434,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:41.583027 env[1226]: time="2025-08-13T00:55:41.582928357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:41.583027 env[1226]: time="2025-08-13T00:55:41.582968002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:41.583027 env[1226]: time="2025-08-13T00:55:41.582978161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:41.583465 env[1226]: time="2025-08-13T00:55:41.583409002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093 pid=3828 runtime=io.containerd.runc.v2 Aug 13 00:55:41.594292 systemd[1]: Started cri-containerd-b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093.scope. 
Aug 13 00:55:41.617968 env[1226]: time="2025-08-13T00:55:41.617901597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqnpl,Uid:c5b368b5-e5b8-4fab-93bc-e15b16040434,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\"" Aug 13 00:55:41.618758 kubelet[1969]: E0813 00:55:41.618724 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:41.625997 env[1226]: time="2025-08-13T00:55:41.625910215Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:55:41.712208 env[1226]: time="2025-08-13T00:55:41.712089794Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea\"" Aug 13 00:55:41.713141 env[1226]: time="2025-08-13T00:55:41.713068859Z" level=info msg="StartContainer for \"e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea\"" Aug 13 00:55:41.729194 systemd[1]: Started cri-containerd-e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea.scope. Aug 13 00:55:41.765616 systemd[1]: cri-containerd-e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea.scope: Deactivated successfully. Aug 13 00:55:41.955201 env[1226]: time="2025-08-13T00:55:41.955034251Z" level=info msg="StartContainer for \"e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea\" returns successfully" Aug 13 00:55:41.971027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea-rootfs.mount: Deactivated successfully. 
Aug 13 00:55:42.010048 env[1226]: time="2025-08-13T00:55:42.009990034Z" level=info msg="shim disconnected" id=e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea Aug 13 00:55:42.010048 env[1226]: time="2025-08-13T00:55:42.010040069Z" level=warning msg="cleaning up after shim disconnected" id=e4fdc7532093ef048029ba4c7c49e0ba824d97c92fc0ccbf533d8e65f98aa4ea namespace=k8s.io Aug 13 00:55:42.010048 env[1226]: time="2025-08-13T00:55:42.010049397Z" level=info msg="cleaning up dead shim" Aug 13 00:55:42.021764 env[1226]: time="2025-08-13T00:55:42.021674357Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3910 runtime=io.containerd.runc.v2\n" Aug 13 00:55:42.227132 kubelet[1969]: E0813 00:55:42.226659 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:42.340835 env[1226]: time="2025-08-13T00:55:42.340140083Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:55:42.364428 env[1226]: time="2025-08-13T00:55:42.364368576Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2\"" Aug 13 00:55:42.364972 env[1226]: time="2025-08-13T00:55:42.364949203Z" level=info msg="StartContainer for \"e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2\"" Aug 13 00:55:42.378523 systemd[1]: Started cri-containerd-e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2.scope. 
Aug 13 00:55:42.402498 env[1226]: time="2025-08-13T00:55:42.402434934Z" level=info msg="StartContainer for \"e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2\" returns successfully" Aug 13 00:55:42.407683 systemd[1]: cri-containerd-e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2.scope: Deactivated successfully. Aug 13 00:55:42.428007 env[1226]: time="2025-08-13T00:55:42.427927776Z" level=info msg="shim disconnected" id=e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2 Aug 13 00:55:42.428007 env[1226]: time="2025-08-13T00:55:42.427985016Z" level=warning msg="cleaning up after shim disconnected" id=e9a2025ab9c75fcbbfafcdef81d3419aa4e87942a41af7c4eb60b16e53795ea2 namespace=k8s.io Aug 13 00:55:42.428007 env[1226]: time="2025-08-13T00:55:42.427995696Z" level=info msg="cleaning up dead shim" Aug 13 00:55:42.435230 env[1226]: time="2025-08-13T00:55:42.435177288Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Aug 13 00:55:43.009545 kubelet[1969]: I0813 00:55:43.009503 1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c263889a-feb3-454a-922b-aecf010a0cc9" path="/var/lib/kubelet/pods/c263889a-feb3-454a-922b-aecf010a0cc9/volumes" Aug 13 00:55:43.198458 kubelet[1969]: I0813 00:55:43.198394 1969 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:55:43Z","lastTransitionTime":"2025-08-13T00:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:55:43.230738 kubelet[1969]: E0813 00:55:43.230668 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 13 00:55:43.237593 env[1226]: time="2025-08-13T00:55:43.237508096Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:55:43.257268 env[1226]: time="2025-08-13T00:55:43.257198133Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6\"" Aug 13 00:55:43.257834 env[1226]: time="2025-08-13T00:55:43.257812895Z" level=info msg="StartContainer for \"412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6\"" Aug 13 00:55:43.278919 systemd[1]: Started cri-containerd-412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6.scope. Aug 13 00:55:43.309517 env[1226]: time="2025-08-13T00:55:43.309452120Z" level=info msg="StartContainer for \"412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6\" returns successfully" Aug 13 00:55:43.312116 systemd[1]: cri-containerd-412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6.scope: Deactivated successfully. 
Aug 13 00:55:43.338676 env[1226]: time="2025-08-13T00:55:43.338616500Z" level=info msg="shim disconnected" id=412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6 Aug 13 00:55:43.338676 env[1226]: time="2025-08-13T00:55:43.338669130Z" level=warning msg="cleaning up after shim disconnected" id=412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6 namespace=k8s.io Aug 13 00:55:43.338676 env[1226]: time="2025-08-13T00:55:43.338681694Z" level=info msg="cleaning up dead shim" Aug 13 00:55:43.347122 env[1226]: time="2025-08-13T00:55:43.347053997Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4029 runtime=io.containerd.runc.v2\n" Aug 13 00:55:43.895524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-412732a5f8a7f38f9c979891711e4c38e3b494dc5f8b6467fe29a1cb5aac60b6-rootfs.mount: Deactivated successfully. Aug 13 00:55:44.235677 kubelet[1969]: E0813 00:55:44.235552 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:44.276410 env[1226]: time="2025-08-13T00:55:44.276336333Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:55:44.298181 env[1226]: time="2025-08-13T00:55:44.298113440Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5\"" Aug 13 00:55:44.298828 env[1226]: time="2025-08-13T00:55:44.298767898Z" level=info msg="StartContainer for \"042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5\"" Aug 13 00:55:44.316018 systemd[1]: Started 
cri-containerd-042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5.scope. Aug 13 00:55:44.340547 systemd[1]: cri-containerd-042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5.scope: Deactivated successfully. Aug 13 00:55:44.346075 env[1226]: time="2025-08-13T00:55:44.346011893Z" level=info msg="StartContainer for \"042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5\" returns successfully" Aug 13 00:55:44.373326 env[1226]: time="2025-08-13T00:55:44.373268200Z" level=info msg="shim disconnected" id=042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5 Aug 13 00:55:44.373326 env[1226]: time="2025-08-13T00:55:44.373317064Z" level=warning msg="cleaning up after shim disconnected" id=042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5 namespace=k8s.io Aug 13 00:55:44.373326 env[1226]: time="2025-08-13T00:55:44.373327534Z" level=info msg="cleaning up dead shim" Aug 13 00:55:44.380828 env[1226]: time="2025-08-13T00:55:44.380773261Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4084 runtime=io.containerd.runc.v2\n" Aug 13 00:55:44.896238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042d5ac1524ce79b6fb25ac00aaf5049eae251c85edfaecce4078e3d0ba273d5-rootfs.mount: Deactivated successfully. 
Aug 13 00:55:45.007743 kubelet[1969]: E0813 00:55:45.007659 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:45.007970 kubelet[1969]: E0813 00:55:45.007764 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:45.239450 kubelet[1969]: E0813 00:55:45.239338 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:45.362483 env[1226]: time="2025-08-13T00:55:45.362414568Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:55:45.533058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668739884.mount: Deactivated successfully. Aug 13 00:55:45.544627 env[1226]: time="2025-08-13T00:55:45.544560953Z" level=info msg="CreateContainer within sandbox \"b7b5502b84f162161cbe0cc019b08b209cde063bc00868995a94254494658093\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6\"" Aug 13 00:55:45.545246 env[1226]: time="2025-08-13T00:55:45.545196314Z" level=info msg="StartContainer for \"a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6\"" Aug 13 00:55:45.560587 systemd[1]: Started cri-containerd-a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6.scope. 
Aug 13 00:55:45.594777 env[1226]: time="2025-08-13T00:55:45.594696724Z" level=info msg="StartContainer for \"a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6\" returns successfully" Aug 13 00:55:45.891151 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:55:46.245689 kubelet[1969]: E0813 00:55:46.245548 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:46.290826 kubelet[1969]: I0813 00:55:46.290746 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lqnpl" podStartSLOduration=5.290725261 podStartE2EDuration="5.290725261s" podCreationTimestamp="2025-08-13 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:46.290379171 +0000 UTC m=+95.477982479" watchObservedRunningTime="2025-08-13 00:55:46.290725261 +0000 UTC m=+95.478328579" Aug 13 00:55:47.569753 kubelet[1969]: E0813 00:55:47.569698 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:48.007385 kubelet[1969]: E0813 00:55:48.007277 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:48.647567 systemd-networkd[1046]: lxc_health: Link UP Aug 13 00:55:48.659146 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:55:48.659376 systemd-networkd[1046]: lxc_health: Gained carrier Aug 13 00:55:48.725012 systemd[1]: run-containerd-runc-k8s.io-a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6-runc.QIBZJq.mount: Deactivated successfully. 
Aug 13 00:55:48.959864 update_engine[1215]: I0813 00:55:48.959635 1215 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:55:48.959864 update_engine[1215]: I0813 00:55:48.959692 1215 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:55:48.961541 update_engine[1215]: I0813 00:55:48.960963 1215 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:55:48.961541 update_engine[1215]: I0813 00:55:48.961426 1215 omaha_request_params.cc:62] Current group set to lts Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962349 1215 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962361 1215 update_attempter.cc:643] Scheduling an action processor start. Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962383 1215 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962424 1215 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962473 1215 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962477 1215 omaha_request_action.cc:271] Request: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: Aug 13 00:55:48.962722 update_engine[1215]: I0813 00:55:48.962486 1215 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:55:48.963220 locksmithd[1241]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Aug 13 00:55:48.966650 update_engine[1215]: I0813 00:55:48.966382 1215 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:55:48.966650 update_engine[1215]: I0813 00:55:48.966614 1215 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:55:48.975356 update_engine[1215]: E0813 00:55:48.975224 1215 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:55:48.975356 update_engine[1215]: I0813 00:55:48.975329 1215 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:55:49.570335 kubelet[1969]: E0813 00:55:49.570291 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:50.094288 systemd-networkd[1046]: lxc_health: Gained IPv6LL Aug 13 00:55:50.253908 kubelet[1969]: E0813 00:55:50.253857 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:50.844078 systemd[1]: run-containerd-runc-k8s.io-a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6-runc.VvlkEK.mount: Deactivated successfully. Aug 13 00:55:51.255935 kubelet[1969]: E0813 00:55:51.255793 1969 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:55.189426 systemd[1]: run-containerd-runc-k8s.io-a758a168caad6b620e68a816322ef48cd767c00482528e4b98582fc193930af6-runc.3j9FD0.mount: Deactivated successfully. Aug 13 00:55:55.238378 sshd[3797]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:55.241328 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:45664.service: Deactivated successfully. Aug 13 00:55:55.242030 systemd[1]: session-26.scope: Deactivated successfully. 
Aug 13 00:55:55.242710 systemd-logind[1213]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:55:55.243466 systemd-logind[1213]: Removed session 26.