Aug 13 00:54:17.968275 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:54:17.968298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:54:17.968308 kernel: BIOS-provided physical RAM map:
Aug 13 00:54:17.968314 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:54:17.968320 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:54:17.968325 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:54:17.968332 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:54:17.968338 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:54:17.968343 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 00:54:17.968360 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 00:54:17.968366 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 00:54:17.968371 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Aug 13 00:54:17.968377 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 00:54:17.968383 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:54:17.968390 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 00:54:17.968398 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 00:54:17.968404 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:54:17.968410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:54:17.968420 kernel: NX (Execute Disable) protection: active
Aug 13 00:54:17.968426 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Aug 13 00:54:17.968432 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Aug 13 00:54:17.968438 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Aug 13 00:54:17.968444 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Aug 13 00:54:17.968450 kernel: extended physical RAM map:
Aug 13 00:54:17.968456 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:54:17.968475 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:54:17.968481 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:54:17.968487 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:54:17.968493 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:54:17.968499 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 00:54:17.968505 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 00:54:17.968511 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Aug 13 00:54:17.968517 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Aug 13 00:54:17.968523 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Aug 13 00:54:17.968529 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Aug 13 00:54:17.968535 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Aug 13 00:54:17.968543 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Aug 13 00:54:17.968549 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 00:54:17.968555 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:54:17.968561 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 00:54:17.968570 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 00:54:17.968576 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:54:17.968583 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:54:17.968590 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:54:17.968597 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Aug 13 00:54:17.968603 kernel: random: crng init done
Aug 13 00:54:17.968610 kernel: SMBIOS 2.8 present.
Aug 13 00:54:17.968617 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 00:54:17.968623 kernel: Hypervisor detected: KVM
Aug 13 00:54:17.968629 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:54:17.968636 kernel: kvm-clock: cpu 0, msr 6b19e001, primary cpu clock
Aug 13 00:54:17.968643 kernel: kvm-clock: using sched offset of 5410235460 cycles
Aug 13 00:54:17.968655 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:54:17.968662 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:54:17.968669 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:54:17.968675 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:54:17.968682 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 00:54:17.968689 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:54:17.968696 kernel: Using GB pages for direct mapping
Aug 13 00:54:17.968702 kernel: Secure boot disabled
Aug 13 00:54:17.968709 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:54:17.968717 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 00:54:17.968724 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:54:17.968731 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968750 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968759 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 00:54:17.968766 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968772 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968782 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968788 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:54:17.968797 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 00:54:17.968804 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 00:54:17.968811 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 00:54:17.968817 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 00:54:17.968824 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 00:54:17.968831 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 00:54:17.968837 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 00:54:17.968844 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 00:54:17.968850 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 00:54:17.968858 kernel: No NUMA configuration found
Aug 13 00:54:17.968865 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 00:54:17.968871 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 00:54:17.968878 kernel: Zone ranges:
Aug 13 00:54:17.968885 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:54:17.968891 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 00:54:17.968898 kernel: Normal empty
Aug 13 00:54:17.968904 kernel: Movable zone start for each node
Aug 13 00:54:17.968911 kernel: Early memory node ranges
Aug 13 00:54:17.968919 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:54:17.968925 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 00:54:17.968932 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 00:54:17.968938 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 00:54:17.968945 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 00:54:17.968951 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 00:54:17.968958 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 00:54:17.968965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:54:17.968971 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:54:17.968978 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 00:54:17.968986 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:54:17.968992 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 00:54:17.968999 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 00:54:17.969006 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 00:54:17.969013 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:54:17.969019 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:54:17.969026 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:54:17.969033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:54:17.969039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:54:17.969047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:54:17.969054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:54:17.969061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:54:17.969070 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:54:17.969079 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:54:17.969086 kernel: TSC deadline timer available
Aug 13 00:54:17.969093 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 00:54:17.969099 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:54:17.969106 kernel: kvm-guest: setup PV sched yield
Aug 13 00:54:17.969114 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 00:54:17.969121 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:54:17.969132 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:54:17.969141 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:54:17.969148 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Aug 13 00:54:17.969155 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Aug 13 00:54:17.969162 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:54:17.969169 kernel: kvm-guest: setup async PF for cpu 0
Aug 13 00:54:17.969176 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Aug 13 00:54:17.969183 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:54:17.969189 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:54:17.969196 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 00:54:17.969205 kernel: Policy zone: DMA32
Aug 13 00:54:17.969213 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:54:17.969220 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:54:17.969227 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:54:17.969236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:54:17.969243 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:54:17.969250 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 169308K reserved, 0K cma-reserved)
Aug 13 00:54:17.969257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:54:17.969264 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:54:17.969271 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:54:17.969278 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:54:17.969285 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:54:17.969294 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:54:17.969301 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:54:17.969308 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:54:17.969315 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:54:17.969322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:54:17.969329 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:54:17.969336 kernel: Console: colour dummy device 80x25
Aug 13 00:54:17.969343 kernel: printk: console [ttyS0] enabled
Aug 13 00:54:17.969361 kernel: ACPI: Core revision 20210730
Aug 13 00:54:17.969368 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:54:17.969377 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:54:17.969384 kernel: x2apic enabled
Aug 13 00:54:17.969391 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:54:17.969398 kernel: kvm-guest: setup PV IPIs
Aug 13 00:54:17.969405 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:54:17.969412 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:54:17.969419 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:54:17.969426 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:54:17.969436 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:54:17.969445 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:54:17.969452 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:54:17.969468 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:54:17.969476 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:54:17.969483 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:54:17.969490 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:54:17.969497 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:54:17.969507 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 00:54:17.969516 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:54:17.969523 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:54:17.969530 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:54:17.969537 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:54:17.969544 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:54:17.969551 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:54:17.969558 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:54:17.969565 kernel: LSM: Security Framework initializing
Aug 13 00:54:17.969572 kernel: SELinux: Initializing.
Aug 13 00:54:17.969579 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:54:17.969588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:54:17.969595 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:54:17.969602 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:54:17.969609 kernel: ... version: 0
Aug 13 00:54:17.969615 kernel: ... bit width: 48
Aug 13 00:54:17.969622 kernel: ... generic registers: 6
Aug 13 00:54:17.969629 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:54:17.969636 kernel: ... max period: 00007fffffffffff
Aug 13 00:54:17.969643 kernel: ... fixed-purpose events: 0
Aug 13 00:54:17.969651 kernel: ... event mask: 000000000000003f
Aug 13 00:54:17.969658 kernel: signal: max sigframe size: 1776
Aug 13 00:54:17.969665 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:54:17.969672 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:54:17.969679 kernel: x86: Booting SMP configuration:
Aug 13 00:54:17.969686 kernel: .... node #0, CPUs: #1
Aug 13 00:54:17.969693 kernel: kvm-clock: cpu 1, msr 6b19e041, secondary cpu clock
Aug 13 00:54:17.969700 kernel: kvm-guest: setup async PF for cpu 1
Aug 13 00:54:17.969707 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Aug 13 00:54:17.969715 kernel: #2
Aug 13 00:54:17.969722 kernel: kvm-clock: cpu 2, msr 6b19e081, secondary cpu clock
Aug 13 00:54:17.969729 kernel: kvm-guest: setup async PF for cpu 2
Aug 13 00:54:17.969736 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Aug 13 00:54:17.969743 kernel: #3
Aug 13 00:54:17.969750 kernel: kvm-clock: cpu 3, msr 6b19e0c1, secondary cpu clock
Aug 13 00:54:17.969756 kernel: kvm-guest: setup async PF for cpu 3
Aug 13 00:54:17.969763 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Aug 13 00:54:17.969770 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:54:17.969782 kernel: smpboot: Max logical packages: 1
Aug 13 00:54:17.969789 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:54:17.969796 kernel: devtmpfs: initialized
Aug 13 00:54:17.969803 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:54:17.969810 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 00:54:17.969817 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 00:54:17.969824 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 00:54:17.969831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 00:54:17.969838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 00:54:17.969847 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:54:17.969854 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:54:17.969861 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:54:17.969868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:54:17.969875 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:54:17.969882 kernel: audit: type=2000 audit(1755046457.174:1): state=initialized audit_enabled=0 res=1
Aug 13 00:54:17.969889 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:54:17.969896 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:54:17.969903 kernel: cpuidle: using governor menu
Aug 13 00:54:17.969911 kernel: ACPI: bus type PCI registered
Aug 13 00:54:17.969918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:54:17.969925 kernel: dca service started, version 1.12.1
Aug 13 00:54:17.969932 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:54:17.969939 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Aug 13 00:54:17.969946 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:54:17.969953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:54:17.969960 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:54:17.969967 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:54:17.969976 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:54:17.969983 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:54:17.969990 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:54:17.969997 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:54:17.970004 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:54:17.970011 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:54:17.970018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:54:17.970025 kernel: ACPI: Interpreter enabled
Aug 13 00:54:17.970032 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:54:17.970040 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:54:17.970047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:54:17.970054 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:54:17.970061 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:54:17.970204 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:54:17.970285 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:54:17.970372 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:54:17.970384 kernel: PCI host bridge to bus 0000:00
Aug 13 00:54:17.970490 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:54:17.970571 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:54:17.970641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:54:17.970708 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:54:17.970774 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:54:17.970842 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 00:54:17.970916 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:54:17.971044 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:54:17.971137 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:54:17.971214 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 00:54:17.971289 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 00:54:17.971376 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 00:54:17.971453 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 00:54:17.971558 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:54:17.971654 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:54:17.971737 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 00:54:17.971814 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 00:54:17.971889 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 00:54:17.971980 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:54:17.972057 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 00:54:17.972145 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 00:54:17.972223 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 00:54:17.972311 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:54:17.972401 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 00:54:17.972491 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 00:54:17.972568 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 00:54:17.972647 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 00:54:17.972739 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:54:17.972814 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:54:17.972905 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:54:17.972981 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 00:54:17.973056 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 00:54:17.973155 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:54:17.973237 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 00:54:17.973247 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:54:17.973254 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:54:17.973262 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:54:17.973269 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:54:17.973276 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:54:17.973283 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:54:17.973290 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:54:17.973300 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:54:17.973307 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:54:17.973314 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:54:17.973322 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:54:17.973331 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:54:17.973338 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:54:17.973345 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:54:17.973375 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:54:17.973382 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:54:17.973391 kernel: iommu: Default domain type: Translated
Aug 13 00:54:17.973398 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:54:17.973559 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:54:17.973656 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:54:17.973775 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:54:17.973787 kernel: vgaarb: loaded
Aug 13 00:54:17.973794 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:54:17.973802 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:54:17.973809 kernel: PTP clock support registered
Aug 13 00:54:17.973819 kernel: Registered efivars operations
Aug 13 00:54:17.973827 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:54:17.973834 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:54:17.973841 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 00:54:17.973848 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 00:54:17.973855 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Aug 13 00:54:17.973862 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Aug 13 00:54:17.973869 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 00:54:17.973876 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 00:54:17.973885 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:54:17.973892 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:54:17.973899 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:54:17.973906 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:54:17.973913 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:54:17.973920 kernel: pnp: PnP ACPI init
Aug 13 00:54:17.974038 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:54:17.974050 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:54:17.974060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:54:17.974067 kernel: NET: Registered PF_INET protocol family
Aug 13 00:54:17.974074 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:54:17.974081 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:54:17.974089 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:54:17.974096 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:54:17.974103 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:54:17.974110 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:54:17.974117 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:54:17.974126 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:54:17.974133 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:54:17.974140 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:54:17.974217 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 00:54:17.974295 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 00:54:17.974378 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:54:17.974448 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:54:17.974531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:54:17.974604 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:54:17.974669 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:54:17.974735 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 00:54:17.974745 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:54:17.974753 kernel: Initialise system trusted keyrings
Aug 13 00:54:17.974760 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:54:17.974767 kernel: Key type asymmetric registered
Aug 13 00:54:17.974774 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:54:17.974781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:54:17.974791 kernel: io scheduler mq-deadline registered
Aug 13 00:54:17.974799 kernel: io scheduler kyber registered
Aug 13 00:54:17.974814 kernel: io scheduler bfq registered
Aug 13 00:54:17.974823 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:54:17.974831 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:54:17.974839 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:54:17.974846 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:54:17.974853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:54:17.974861 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:54:17.974870 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:54:17.974877 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:54:17.974885 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:54:17.974892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:54:17.974977 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:54:17.975052 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:54:17.975121 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:54:17 UTC (1755046457)
Aug 13 00:54:17.975195 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:54:17.975205 kernel: efifb: probing for efifb
Aug 13 00:54:17.975213 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Aug 13 00:54:17.975220 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Aug 13 00:54:17.975228 kernel: efifb: scrolling: redraw
Aug 13 00:54:17.975235 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:54:17.975243 kernel: Console: switching to colour frame buffer device 160x50
Aug 13 00:54:17.975250 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:54:17.975258 kernel: pstore: Registered efi as persistent store backend
Aug 13 00:54:17.975267 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:54:17.975274 kernel: Segment Routing with IPv6
Aug 13 00:54:17.975282 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:54:17.975289 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:54:17.975298 kernel: Key type dns_resolver registered
Aug 13 00:54:17.975305 kernel: IPI shorthand broadcast: enabled
Aug 13 00:54:17.975314 kernel: sched_clock: Marking stable (592121248, 126222140)->(748590290, -30246902)
Aug 13 00:54:17.975322 kernel: registered taskstats version 1
Aug 13 00:54:17.975329 kernel: Loading compiled-in X.509 certificates
Aug 13 00:54:17.975337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:54:17.975344 kernel: Key type .fscrypt registered
Aug 13 00:54:17.975362 kernel: Key type fscrypt-provisioning registered
Aug 13 00:54:17.975369 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:54:17.975377 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:54:17.975384 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:54:17.975394 kernel: ima: No architecture policies found
Aug 13 00:54:17.975401 kernel: clk: Disabling unused clocks
Aug 13 00:54:17.975409 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:54:17.975417 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:54:17.975424 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:54:17.975432 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:54:17.975439 kernel: Run /init as init process
Aug 13 00:54:17.975446 kernel: with arguments:
Aug 13 00:54:17.975454 kernel: /init
Aug 13 00:54:17.975475 kernel: with environment:
Aug 13 00:54:17.975483 kernel: HOME=/
Aug 13 00:54:17.975490 kernel: TERM=linux
Aug 13 00:54:17.975497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:54:17.975506 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:54:17.975517 systemd[1]: Detected virtualization kvm.
Aug 13 00:54:17.975525 systemd[1]: Detected architecture x86-64.
Aug 13 00:54:17.975533 systemd[1]: Running in initrd.
Aug 13 00:54:17.975543 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:54:17.975551 systemd[1]: Hostname set to .
Aug 13 00:54:17.975559 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:54:17.975567 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:54:17.975575 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:54:17.975582 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:54:17.975590 systemd[1]: Reached target paths.target.
Aug 13 00:54:17.975598 systemd[1]: Reached target slices.target.
Aug 13 00:54:17.975607 systemd[1]: Reached target swap.target. Aug 13 00:54:17.975615 systemd[1]: Reached target timers.target. Aug 13 00:54:17.975623 systemd[1]: Listening on iscsid.socket. Aug 13 00:54:17.975631 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:54:17.975639 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:54:17.975647 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:54:17.975655 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:54:17.975663 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:54:17.975672 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:54:17.975679 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:54:17.975687 systemd[1]: Reached target sockets.target. Aug 13 00:54:17.975695 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:54:17.975703 systemd[1]: Finished network-cleanup.service. Aug 13 00:54:17.975710 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:54:17.975718 systemd[1]: Starting systemd-journald.service... Aug 13 00:54:17.975726 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:54:17.975734 systemd[1]: Starting systemd-resolved.service... Aug 13 00:54:17.975743 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:54:17.975752 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:54:17.975760 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:54:17.975768 kernel: audit: type=1130 audit(1755046457.966:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.975776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:54:17.975787 systemd-journald[198]: Journal started Aug 13 00:54:17.975829 systemd-journald[198]: Runtime Journal (/run/log/journal/e3ab0d9bb217489f94f95a14567b9b76) is 6.0M, max 48.4M, 42.4M free. 
Aug 13 00:54:17.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.973751 systemd-modules-load[199]: Inserted module 'overlay' Aug 13 00:54:17.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.982126 systemd[1]: Started systemd-journald.service. Aug 13 00:54:17.982148 kernel: audit: type=1130 audit(1755046457.978:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.982396 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:54:17.987821 kernel: audit: type=1130 audit(1755046457.982:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.987850 kernel: audit: type=1130 audit(1755046457.987:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:17.983630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Aug 13 00:54:17.991664 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:54:18.001918 systemd-resolved[200]: Positive Trust Anchors: Aug 13 00:54:18.001931 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:54:18.001972 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:54:18.010045 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:54:18.014490 kernel: audit: type=1130 audit(1755046458.010:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.015020 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:54:18.026920 dracut-cmdline[215]: dracut-dracut-053 Aug 13 00:54:18.029962 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:54:18.102123 systemd-resolved[200]: Defaulting to hostname 'linux'. 
Aug 13 00:54:18.105585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:54:18.103993 systemd[1]: Started systemd-resolved.service. Aug 13 00:54:18.110712 kernel: audit: type=1130 audit(1755046458.106:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.107413 systemd[1]: Reached target nss-lookup.target. Aug 13 00:54:18.112417 systemd-modules-load[199]: Inserted module 'br_netfilter' Aug 13 00:54:18.113364 kernel: Bridge firewalling registered Aug 13 00:54:18.127494 kernel: SCSI subsystem initialized Aug 13 00:54:18.138497 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:54:18.144959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:54:18.145013 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:54:18.146406 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:54:18.150699 systemd-modules-load[199]: Inserted module 'dm_multipath' Aug 13 00:54:18.151977 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:54:18.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.155214 systemd[1]: Starting systemd-sysctl.service... 
Aug 13 00:54:18.158669 kernel: audit: type=1130 audit(1755046458.153:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.161497 kernel: iscsi: registered transport (tcp) Aug 13 00:54:18.166316 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:54:18.170765 kernel: audit: type=1130 audit(1755046458.166:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.189684 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:54:18.189729 kernel: QLogic iSCSI HBA Driver Aug 13 00:54:18.215116 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:54:18.219876 kernel: audit: type=1130 audit(1755046458.215:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.219892 systemd[1]: Starting dracut-pre-udev.service... 
Aug 13 00:54:18.268499 kernel: raid6: avx2x4 gen() 29601 MB/s Aug 13 00:54:18.285498 kernel: raid6: avx2x4 xor() 7506 MB/s Aug 13 00:54:18.302511 kernel: raid6: avx2x2 gen() 31661 MB/s Aug 13 00:54:18.319507 kernel: raid6: avx2x2 xor() 18847 MB/s Aug 13 00:54:18.336509 kernel: raid6: avx2x1 gen() 25631 MB/s Aug 13 00:54:18.353489 kernel: raid6: avx2x1 xor() 15038 MB/s Aug 13 00:54:18.370492 kernel: raid6: sse2x4 gen() 14557 MB/s Aug 13 00:54:18.387489 kernel: raid6: sse2x4 xor() 7204 MB/s Aug 13 00:54:18.404491 kernel: raid6: sse2x2 gen() 15840 MB/s Aug 13 00:54:18.421487 kernel: raid6: sse2x2 xor() 9509 MB/s Aug 13 00:54:18.438498 kernel: raid6: sse2x1 gen() 11980 MB/s Aug 13 00:54:18.455970 kernel: raid6: sse2x1 xor() 6791 MB/s Aug 13 00:54:18.456063 kernel: raid6: using algorithm avx2x2 gen() 31661 MB/s Aug 13 00:54:18.456073 kernel: raid6: .... xor() 18847 MB/s, rmw enabled Aug 13 00:54:18.456663 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:54:18.470512 kernel: xor: automatically using best checksumming function avx Aug 13 00:54:18.571508 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:54:18.581145 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:54:18.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.583000 audit: BPF prog-id=7 op=LOAD Aug 13 00:54:18.583000 audit: BPF prog-id=8 op=LOAD Aug 13 00:54:18.584250 systemd[1]: Starting systemd-udevd.service... Aug 13 00:54:18.599348 systemd-udevd[401]: Using default interface naming scheme 'v252'. Aug 13 00:54:18.603926 systemd[1]: Started systemd-udevd.service. Aug 13 00:54:18.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:18.606397 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:54:18.620231 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Aug 13 00:54:18.651833 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:54:18.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.654695 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:54:18.691632 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:54:18.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:18.724965 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 00:54:18.730905 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:54:18.730925 kernel: GPT:9289727 != 19775487 Aug 13 00:54:18.730934 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:54:18.730943 kernel: GPT:9289727 != 19775487 Aug 13 00:54:18.730952 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:54:18.730961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:18.733480 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:54:18.745571 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:54:18.745601 kernel: AES CTR mode by8 optimization enabled Aug 13 00:54:18.754495 kernel: libata version 3.00 loaded. 
Aug 13 00:54:18.765490 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (448) Aug 13 00:54:18.766595 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:54:18.793849 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:54:18.793867 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 00:54:18.793970 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:54:18.794050 kernel: scsi host0: ahci Aug 13 00:54:18.794165 kernel: scsi host1: ahci Aug 13 00:54:18.794263 kernel: scsi host2: ahci Aug 13 00:54:18.794382 kernel: scsi host3: ahci Aug 13 00:54:18.794513 kernel: scsi host4: ahci Aug 13 00:54:18.794624 kernel: scsi host5: ahci Aug 13 00:54:18.794736 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Aug 13 00:54:18.794750 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Aug 13 00:54:18.794759 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Aug 13 00:54:18.794768 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Aug 13 00:54:18.794780 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Aug 13 00:54:18.794791 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Aug 13 00:54:18.773348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:54:18.777983 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:54:18.782889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:54:18.784920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:54:18.796545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:54:18.798683 systemd[1]: Starting disk-uuid.service... Aug 13 00:54:18.810016 disk-uuid[532]: Primary Header is updated. Aug 13 00:54:18.810016 disk-uuid[532]: Secondary Entries is updated. 
Aug 13 00:54:18.810016 disk-uuid[532]: Secondary Header is updated. Aug 13 00:54:18.813600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:18.816484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:19.103071 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:54:19.103138 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:54:19.103160 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 00:54:19.104474 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:54:19.105514 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:54:19.106517 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 00:54:19.107911 kernel: ata3.00: applying bridge limits Aug 13 00:54:19.108486 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:54:19.109519 kernel: ata3.00: configured for UDMA/100 Aug 13 00:54:19.111510 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:54:19.160521 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 00:54:19.178273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:54:19.178289 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:54:19.818511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:19.818799 disk-uuid[533]: The operation has completed successfully. Aug 13 00:54:19.845153 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:54:19.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:19.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:19.845240 systemd[1]: Finished disk-uuid.service. Aug 13 00:54:19.855378 systemd[1]: Starting verity-setup.service... 
Aug 13 00:54:19.870504 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 00:54:19.896966 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:54:19.899362 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:54:19.902833 systemd[1]: Finished verity-setup.service. Aug 13 00:54:19.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.026496 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:54:20.026556 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:54:20.028139 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:54:20.030132 systemd[1]: Starting ignition-setup.service... Aug 13 00:54:20.032243 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:54:20.042955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:54:20.042988 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:54:20.042999 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:54:20.051869 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:54:20.069882 systemd[1]: Finished ignition-setup.service. Aug 13 00:54:20.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.071922 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:54:20.112347 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:54:20.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:20.113000 audit: BPF prog-id=9 op=LOAD Aug 13 00:54:20.115013 systemd[1]: Starting systemd-networkd.service... Aug 13 00:54:20.138942 systemd-networkd[715]: lo: Link UP Aug 13 00:54:20.138950 systemd-networkd[715]: lo: Gained carrier Aug 13 00:54:20.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.139880 systemd-networkd[715]: Enumeration completed Aug 13 00:54:20.139957 systemd[1]: Started systemd-networkd.service. Aug 13 00:54:20.140891 systemd[1]: Reached target network.target. Aug 13 00:54:20.141603 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:54:20.143527 systemd[1]: Starting iscsiuio.service... Aug 13 00:54:20.143899 systemd-networkd[715]: eth0: Link UP Aug 13 00:54:20.143904 systemd-networkd[715]: eth0: Gained carrier Aug 13 00:54:20.157980 ignition[656]: Ignition 2.14.0 Aug 13 00:54:20.158006 ignition[656]: Stage: fetch-offline Aug 13 00:54:20.158237 ignition[656]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:20.158324 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:20.158563 ignition[656]: parsed url from cmdline: "" Aug 13 00:54:20.158570 ignition[656]: no config URL provided Aug 13 00:54:20.158589 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:54:20.158597 ignition[656]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:54:20.158658 ignition[656]: op(1): [started] loading QEMU firmware config module Aug 13 00:54:20.158674 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 00:54:20.168212 ignition[656]: op(1): [finished] loading QEMU firmware config module Aug 13 00:54:20.180440 systemd[1]: Started iscsiuio.service. 
Aug 13 00:54:20.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.181790 systemd[1]: Starting iscsid.service... Aug 13 00:54:20.187597 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:54:20.187597 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 00:54:20.187597 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:54:20.187597 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:54:20.187597 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:54:20.187597 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:54:20.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.190622 systemd[1]: Started iscsid.service. Aug 13 00:54:20.195942 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:54:20.211847 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:54:20.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.212844 systemd[1]: Reached target remote-fs-pre.target. 
Aug 13 00:54:20.214337 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:54:20.215957 systemd[1]: Reached target remote-fs.target. Aug 13 00:54:20.218671 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:54:20.229317 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:54:20.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.236089 ignition[656]: parsing config with SHA512: 974a22b0e9fe6cb2a4d270b86361a2e09957bc53af564eab13076330605c4f7d3d0d908599e41271a67d58e4fea84413d5d81c8c47028778057d157dbd04f86c Aug 13 00:54:20.237654 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:54:20.248936 unknown[656]: fetched base config from "system" Aug 13 00:54:20.248948 unknown[656]: fetched user config from "qemu" Aug 13 00:54:20.249422 ignition[656]: fetch-offline: fetch-offline passed Aug 13 00:54:20.249502 ignition[656]: Ignition finished successfully Aug 13 00:54:20.252975 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:54:20.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.253969 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:54:20.255392 systemd[1]: Starting ignition-kargs.service... 
Aug 13 00:54:20.267088 ignition[736]: Ignition 2.14.0 Aug 13 00:54:20.267101 ignition[736]: Stage: kargs Aug 13 00:54:20.267202 ignition[736]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:20.267212 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:20.271062 ignition[736]: kargs: kargs passed Aug 13 00:54:20.271107 ignition[736]: Ignition finished successfully Aug 13 00:54:20.273229 systemd[1]: Finished ignition-kargs.service. Aug 13 00:54:20.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.274973 systemd[1]: Starting ignition-disks.service... Aug 13 00:54:20.286956 ignition[742]: Ignition 2.14.0 Aug 13 00:54:20.286967 ignition[742]: Stage: disks Aug 13 00:54:20.287077 ignition[742]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:20.287088 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:20.288633 ignition[742]: disks: disks passed Aug 13 00:54:20.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.289694 systemd[1]: Finished ignition-disks.service. Aug 13 00:54:20.288671 ignition[742]: Ignition finished successfully Aug 13 00:54:20.291253 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:54:20.292685 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:54:20.293544 systemd[1]: Reached target local-fs.target. Aug 13 00:54:20.295020 systemd[1]: Reached target sysinit.target. Aug 13 00:54:20.295076 systemd[1]: Reached target basic.target. Aug 13 00:54:20.296252 systemd[1]: Starting systemd-fsck-root.service... 
Aug 13 00:54:20.338399 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.89 Aug 13 00:54:20.338418 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Aug 13 00:54:20.394693 systemd-fsck[750]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 00:54:20.424757 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:54:20.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.426035 systemd[1]: Mounting sysroot.mount... Aug 13 00:54:20.433498 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:54:20.434449 systemd[1]: Mounted sysroot.mount. Aug 13 00:54:20.434584 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:54:20.438983 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:54:20.439584 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:54:20.439699 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:54:20.439742 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:54:20.448924 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:54:20.450425 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:54:20.457333 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:54:20.462236 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:54:20.465367 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:54:20.469054 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:54:20.497726 systemd[1]: Finished initrd-setup-root.service. 
Aug 13 00:54:20.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.499491 systemd[1]: Starting ignition-mount.service... Aug 13 00:54:20.500727 systemd[1]: Starting sysroot-boot.service... Aug 13 00:54:20.505432 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:54:20.517622 systemd[1]: Finished sysroot-boot.service. Aug 13 00:54:20.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.520688 ignition[802]: INFO : Ignition 2.14.0 Aug 13 00:54:20.520688 ignition[802]: INFO : Stage: mount Aug 13 00:54:20.522721 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:20.522721 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:20.522721 ignition[802]: INFO : mount: mount passed Aug 13 00:54:20.522721 ignition[802]: INFO : Ignition finished successfully Aug 13 00:54:20.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:20.523156 systemd[1]: Finished ignition-mount.service. Aug 13 00:54:20.910493 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:54:20.918507 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Aug 13 00:54:20.918565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:54:20.920508 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:54:20.920532 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:54:20.926490 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Aug 13 00:54:20.928276 systemd[1]: Starting ignition-files.service... Aug 13 00:54:20.953287 ignition[831]: INFO : Ignition 2.14.0 Aug 13 00:54:20.953287 ignition[831]: INFO : Stage: files Aug 13 00:54:20.955067 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:20.955067 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:20.955067 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:54:20.959065 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:54:20.959065 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:54:20.959065 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:54:20.959065 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:54:20.980451 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:54:20.980451 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:54:20.980451 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:54:20.959345 unknown[831]: wrote ssh authorized keys file for user: core Aug 13 00:54:21.092708 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:54:21.473762 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:54:21.475946 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:54:21.475946 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:54:21.571023 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:54:21.678935 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:54:21.678935 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:54:21.682740 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:54:21.720629 systemd-networkd[715]: eth0: Gained IPv6LL Aug 13 00:54:22.095623 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:54:22.648867 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:54:22.648867 ignition[831]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:54:22.652250 ignition[831]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:54:22.654125 ignition[831]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:54:22.654125 ignition[831]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:54:22.654125 ignition[831]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 00:54:22.658282 ignition[831]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:54:22.658282 ignition[831]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:54:22.658282 ignition[831]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 00:54:22.663221 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:54:22.664562 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:54:22.664562 ignition[831]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 00:54:22.664562 ignition[831]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:54:22.737521 ignition[831]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:54:22.739284 ignition[831]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 00:54:22.740749 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:54:22.742496 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:54:22.744116 ignition[831]: INFO : files: files passed Aug 13 00:54:22.744844 ignition[831]: INFO : Ignition finished successfully Aug 13 00:54:22.746079 systemd[1]: Finished ignition-files.service. Aug 13 00:54:22.752308 kernel: kauditd_printk_skb: 24 callbacks suppressed Aug 13 00:54:22.752330 kernel: audit: type=1130 audit(1755046462.746:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:22.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.747855 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:54:22.752273 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:54:22.758108 initrd-setup-root-after-ignition[854]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:54:22.763524 kernel: audit: type=1130 audit(1755046462.757:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.752848 systemd[1]: Starting ignition-quench.service... Aug 13 00:54:22.771423 kernel: audit: type=1130 audit(1755046462.763:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.771438 kernel: audit: type=1131 audit(1755046462.763:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:22.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.771547 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:54:22.754244 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:54:22.758266 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:54:22.758337 systemd[1]: Finished ignition-quench.service. Aug 13 00:54:22.763622 systemd[1]: Reached target ignition-complete.target. Aug 13 00:54:22.770563 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:54:22.782007 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:54:22.782093 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:54:22.791013 kernel: audit: type=1130 audit(1755046462.783:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.791029 kernel: audit: type=1131 audit(1755046462.783:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.784041 systemd[1]: Reached target initrd-fs.target. Aug 13 00:54:22.791023 systemd[1]: Reached target initrd.target. 
Aug 13 00:54:22.791828 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:54:22.792548 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:54:22.801684 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:54:22.806751 kernel: audit: type=1130 audit(1755046462.802:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.803291 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:54:22.812121 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:54:22.813042 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:54:22.814664 systemd[1]: Stopped target timers.target. Aug 13 00:54:22.816187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:54:22.822013 kernel: audit: type=1131 audit(1755046462.817:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.816311 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:54:22.817784 systemd[1]: Stopped target initrd.target. Aug 13 00:54:22.822112 systemd[1]: Stopped target basic.target. Aug 13 00:54:22.823650 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:54:22.825223 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:54:22.826768 systemd[1]: Stopped target initrd-root-device.target. 
Aug 13 00:54:22.828480 systemd[1]: Stopped target remote-fs.target. Aug 13 00:54:22.830049 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:54:22.831712 systemd[1]: Stopped target sysinit.target. Aug 13 00:54:22.833218 systemd[1]: Stopped target local-fs.target. Aug 13 00:54:22.834773 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:54:22.836416 systemd[1]: Stopped target swap.target. Aug 13 00:54:22.844631 kernel: audit: type=1131 audit(1755046462.839:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.837861 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:54:22.837981 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:54:22.850858 kernel: audit: type=1131 audit(1755046462.846:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.840123 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:54:22.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.844672 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:54:22.844786 systemd[1]: Stopped dracut-initqueue.service. 
Aug 13 00:54:22.846573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:54:22.846682 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:54:22.850990 systemd[1]: Stopped target paths.target. Aug 13 00:54:22.852444 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:54:22.857513 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:54:22.858839 systemd[1]: Stopped target slices.target. Aug 13 00:54:22.860451 systemd[1]: Stopped target sockets.target. Aug 13 00:54:22.861919 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:54:22.861989 systemd[1]: Closed iscsid.socket. Aug 13 00:54:22.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.863403 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:54:22.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.863530 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:54:22.865244 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:54:22.865351 systemd[1]: Stopped ignition-files.service. Aug 13 00:54:22.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.867475 systemd[1]: Stopping ignition-mount.service... Aug 13 00:54:22.868552 systemd[1]: Stopping iscsiuio.service... Aug 13 00:54:22.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 00:54:22.869949 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:54:22.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.878281 ignition[871]: INFO : Ignition 2.14.0 Aug 13 00:54:22.878281 ignition[871]: INFO : Stage: umount Aug 13 00:54:22.878281 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:54:22.878281 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:54:22.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.870086 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:54:22.884866 ignition[871]: INFO : umount: umount passed Aug 13 00:54:22.884866 ignition[871]: INFO : Ignition finished successfully Aug 13 00:54:22.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.872714 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:54:22.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.874645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 13 00:54:22.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.874794 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:54:22.875812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:54:22.875901 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:54:22.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.879312 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:54:22.879391 systemd[1]: Stopped iscsiuio.service. Aug 13 00:54:22.880904 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:54:22.880967 systemd[1]: Stopped ignition-mount.service. Aug 13 00:54:22.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.882601 systemd[1]: Stopped target network.target. Aug 13 00:54:22.884162 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:54:22.884192 systemd[1]: Closed iscsiuio.socket. Aug 13 00:54:22.885517 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:54:22.885550 systemd[1]: Stopped ignition-disks.service. Aug 13 00:54:22.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:22.886283 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:54:22.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.886314 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:54:22.888805 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:54:22.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.888836 systemd[1]: Stopped ignition-setup.service. Aug 13 00:54:22.890575 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:54:22.892429 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:54:22.894238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:54:22.894310 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:54:22.897518 systemd-networkd[715]: eth0: DHCPv6 lease lost Aug 13 00:54:22.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.919000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:54:22.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.898525 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:54:22.898596 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:54:22.901935 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Aug 13 00:54:22.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.925000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:54:22.901962 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:54:22.904068 systemd[1]: Stopping network-cleanup.service... Aug 13 00:54:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:22.904796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:54:22.904834 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:54:22.906648 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 13 00:54:22.906680 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:54:22.909084 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:54:22.909116 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:54:22.910693 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:54:22.915736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:54:22.916137 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:54:22.916232 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:54:22.919652 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:54:22.919788 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:54:22.921423 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:54:22.922626 systemd[1]: Stopped network-cleanup.service. Aug 13 00:54:22.925161 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:54:22.925574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:54:22.925603 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:54:22.927163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:54:22.927189 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:54:22.927264 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:54:22.927293 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:54:22.927512 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:54:22.927542 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:54:22.927723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:54:22.927749 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:54:22.928488 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:54:22.928781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:54:22.928825 systemd[1]: Stopped systemd-vconsole-setup.service. 
Aug 13 00:54:22.933551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:54:22.933615 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:54:23.019108 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:54:23.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:23.019222 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:54:23.021529 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:54:23.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:23.023302 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:54:23.023342 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:54:23.026159 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:54:23.042520 systemd[1]: Switching root. Aug 13 00:54:23.065042 iscsid[721]: iscsid shutting down. Aug 13 00:54:23.066154 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Aug 13 00:54:23.066229 systemd-journald[198]: Journal stopped Aug 13 00:54:27.089023 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:54:27.089120 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 00:54:27.089148 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:54:27.089164 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:54:27.089184 kernel: SELinux: policy capability open_perms=1 Aug 13 00:54:27.089198 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:54:27.089214 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:54:27.089229 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:54:27.089242 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:54:27.089262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:54:27.089284 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:54:27.089300 systemd[1]: Successfully loaded SELinux policy in 41.966ms. Aug 13 00:54:27.089324 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.531ms. Aug 13 00:54:27.089341 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:54:27.089358 systemd[1]: Detected virtualization kvm. Aug 13 00:54:27.089373 systemd[1]: Detected architecture x86-64. Aug 13 00:54:27.089388 systemd[1]: Detected first boot. Aug 13 00:54:27.089405 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:54:27.089429 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:54:27.089445 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:54:27.089512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 13 00:54:27.089554 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:27.089577 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:27.089603 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:54:27.089620 systemd[1]: Stopped iscsid.service. Aug 13 00:54:27.089638 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:54:27.089654 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:54:27.089670 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:54:27.089695 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:54:27.089712 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:54:27.089727 systemd[1]: Created slice system-getty.slice. Aug 13 00:54:27.089754 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:54:27.089777 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:54:27.089793 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:54:27.089811 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:54:27.089828 systemd[1]: Created slice user.slice. Aug 13 00:54:27.089843 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:54:27.089859 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:54:27.089875 systemd[1]: Set up automount boot.automount. Aug 13 00:54:27.089891 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:54:27.089917 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:54:27.089934 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:54:27.089950 systemd[1]: Stopped target initrd-root-fs.target. 
Aug 13 00:54:27.089967 systemd[1]: Reached target integritysetup.target.
Aug 13 00:54:27.089988 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:54:27.090003 systemd[1]: Reached target remote-fs.target.
Aug 13 00:54:27.090019 systemd[1]: Reached target slices.target.
Aug 13 00:54:27.090034 systemd[1]: Reached target swap.target.
Aug 13 00:54:27.090058 systemd[1]: Reached target torcx.target.
Aug 13 00:54:27.090091 systemd[1]: Reached target veritysetup.target.
Aug 13 00:54:27.090108 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 00:54:27.090125 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 00:54:27.090141 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:54:27.090158 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:54:27.090174 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:54:27.090191 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 00:54:27.090207 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 00:54:27.090223 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 00:54:27.090249 systemd[1]: Mounting media.mount...
Aug 13 00:54:27.090267 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:54:27.090283 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 00:54:27.090299 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 00:54:27.090313 systemd[1]: Mounting tmp.mount...
Aug 13 00:54:27.090326 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 00:54:27.090339 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:54:27.090353 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:54:27.090366 systemd[1]: Starting modprobe@configfs.service...
Aug 13 00:54:27.090399 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:54:27.090419 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:54:27.090433 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:54:27.090484 systemd[1]: Starting modprobe@fuse.service...
Aug 13 00:54:27.090509 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:54:27.090526 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:54:27.090542 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:54:27.090557 systemd[1]: Stopped systemd-fsck-root.service.
Aug 13 00:54:27.090584 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:54:27.090602 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:54:27.090617 systemd[1]: Stopped systemd-journald.service.
Aug 13 00:54:27.090633 kernel: loop: module loaded
Aug 13 00:54:27.090648 kernel: fuse: init (API version 7.34)
Aug 13 00:54:27.090664 systemd[1]: Starting systemd-journald.service...
Aug 13 00:54:27.090680 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:54:27.090696 systemd[1]: Starting systemd-network-generator.service...
Aug 13 00:54:27.090712 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 00:54:27.090729 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:54:27.090755 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:54:27.090771 systemd[1]: Stopped verity-setup.service.
Aug 13 00:54:27.090788 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:54:27.090807 systemd-journald[982]: Journal started
Aug 13 00:54:27.090864 systemd-journald[982]: Runtime Journal (/run/log/journal/e3ab0d9bb217489f94f95a14567b9b76) is 6.0M, max 48.4M, 42.4M free.
Aug 13 00:54:23.134000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:54:23.867000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:54:23.867000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:54:23.867000 audit: BPF prog-id=10 op=LOAD
Aug 13 00:54:23.867000 audit: BPF prog-id=10 op=UNLOAD
Aug 13 00:54:23.867000 audit: BPF prog-id=11 op=LOAD
Aug 13 00:54:23.867000 audit: BPF prog-id=11 op=UNLOAD
Aug 13 00:54:23.901000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Aug 13 00:54:23.901000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:23.901000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 00:54:23.904000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Aug 13 00:54:23.904000 audit[904]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155999 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:23.904000 audit: CWD cwd="/"
Aug 13 00:54:23.904000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:23.904000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:23.904000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 00:54:26.954000 audit: BPF prog-id=12 op=LOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=13 op=LOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=14 op=LOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=15 op=LOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=12 op=UNLOAD
Aug 13 00:54:26.954000 audit: BPF prog-id=16 op=LOAD
Aug 13 00:54:26.955000 audit: BPF prog-id=17 op=LOAD
Aug 13 00:54:26.955000 audit: BPF prog-id=13 op=UNLOAD
Aug 13 00:54:26.955000 audit: BPF prog-id=14 op=UNLOAD
Aug 13 00:54:26.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.968000 audit: BPF prog-id=15 op=UNLOAD
Aug 13 00:54:27.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.069000 audit: BPF prog-id=18 op=LOAD
Aug 13 00:54:27.069000 audit: BPF prog-id=19 op=LOAD
Aug 13 00:54:27.092486 systemd[1]: Mounted dev-hugepages.mount.
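The `PROCTITLE` records in the audit trail above carry the torcx-generator's command line hex-encoded, with NUL bytes separating the argv entries (the blob itself is truncated by the audit subsystem's length limit). A small sketch of decoding one, using a prefix of the exact blob logged above:

```python
# Decode an audit PROCTITLE record. The proctitle= field is the process's
# argv serialized as hex, with NUL bytes separating individual arguments.
def decode_proctitle(hex_blob: str) -> list[str]:
    raw = bytes.fromhex(hex_blob)
    # Split on NUL; drop the empty tail a trailing NUL would produce.
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

# A prefix of the blob from the log (the full record is truncated there).
argv = decode_proctitle(
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
    "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
)
# argv[0] is the generator binary, argv[1] its first argument
# ("/usr/lib/systemd/system-generators/torcx-generator", "/run/systemd/generator").
```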
Aug 13 00:54:27.069000 audit: BPF prog-id=20 op=LOAD
Aug 13 00:54:27.069000 audit: BPF prog-id=16 op=UNLOAD
Aug 13 00:54:27.069000 audit: BPF prog-id=17 op=UNLOAD
Aug 13 00:54:27.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.087000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 00:54:27.087000 audit[982]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffccb487240 a2=4000 a3=7ffccb4872dc items=0 ppid=1 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:27.087000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 00:54:23.899952 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:54:26.952043 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:54:23.900163 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 00:54:26.952060 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Aug 13 00:54:23.900194 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 00:54:26.956297 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:54:23.900224 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Aug 13 00:54:23.900233 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="skipped missing lower profile" missing profile=oem
Aug 13 00:54:23.900265 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Aug 13 00:54:23.900278 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Aug 13 00:54:27.094018 systemd[1]: Started systemd-journald.service.
Aug 13 00:54:23.900532 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Aug 13 00:54:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:23.900571 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 00:54:23.900584 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 00:54:27.094627 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 00:54:23.901150 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Aug 13 00:54:23.901208 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Aug 13 00:54:23.901232 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Aug 13 00:54:23.901251 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Aug 13 00:54:23.901273 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Aug 13 00:54:23.901291 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:23Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Aug 13 00:54:27.095594 systemd[1]: Mounted media.mount.
Aug 13 00:54:26.644848 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:54:26.645231 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:54:26.645435 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:54:26.645653 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:54:26.645715 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Aug 13 00:54:26.645790 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:54:26Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Aug 13 00:54:27.096494 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 00:54:27.097381 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 00:54:27.098331 systemd[1]: Mounted tmp.mount.
Aug 13 00:54:27.099618 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:54:27.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.100703 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:54:27.100835 systemd[1]: Finished modprobe@configfs.service.
Aug 13 00:54:27.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.101940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:54:27.102075 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:54:27.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.103160 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:54:27.103284 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:54:27.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.104383 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 00:54:27.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.105561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:54:27.105676 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:54:27.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.106850 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:54:27.106980 systemd[1]: Finished modprobe@fuse.service.
Aug 13 00:54:27.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.108122 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:54:27.108250 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:54:27.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.109367 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:54:27.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.110625 systemd[1]: Finished systemd-network-generator.service.
Aug 13 00:54:27.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.111810 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 00:54:27.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.113225 systemd[1]: Reached target network-pre.target.
Aug 13 00:54:27.115484 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Aug 13 00:54:27.117517 systemd[1]: Mounting sys-kernel-config.mount...
Aug 13 00:54:27.118601 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:54:27.120677 systemd[1]: Starting systemd-hwdb-update.service...
Aug 13 00:54:27.122670 systemd[1]: Starting systemd-journal-flush.service...
Aug 13 00:54:27.124037 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:54:27.125125 systemd[1]: Starting systemd-random-seed.service...
Aug 13 00:54:27.126103 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:54:27.127187 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:54:27.128007 systemd-journald[982]: Time spent on flushing to /var/log/journal/e3ab0d9bb217489f94f95a14567b9b76 is 23.261ms for 1164 entries.
Aug 13 00:54:27.128007 systemd-journald[982]: System Journal (/var/log/journal/e3ab0d9bb217489f94f95a14567b9b76) is 8.0M, max 195.6M, 187.6M free.
Aug 13 00:54:27.303892 systemd-journald[982]: Received client request to flush runtime journal.
Aug 13 00:54:27.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.130160 systemd[1]: Starting systemd-sysusers.service...
Aug 13 00:54:27.135071 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Aug 13 00:54:27.304726 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:54:27.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.136241 systemd[1]: Mounted sys-kernel-config.mount.
Aug 13 00:54:27.146673 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:54:27.149168 systemd[1]: Starting systemd-udev-settle.service...
Aug 13 00:54:27.150260 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:54:27.151323 systemd[1]: Finished systemd-sysusers.service.
Aug 13 00:54:27.206944 systemd[1]: Finished systemd-random-seed.service.
Aug 13 00:54:27.207934 systemd[1]: Reached target first-boot-complete.target.
Aug 13 00:54:27.304784 systemd[1]: Finished systemd-journal-flush.service.
Aug 13 00:54:27.727308 systemd[1]: Finished systemd-hwdb-update.service.
Aug 13 00:54:27.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.728000 audit: BPF prog-id=21 op=LOAD
Aug 13 00:54:27.728000 audit: BPF prog-id=22 op=LOAD
Aug 13 00:54:27.728000 audit: BPF prog-id=7 op=UNLOAD
Aug 13 00:54:27.728000 audit: BPF prog-id=8 op=UNLOAD
Aug 13 00:54:27.730058 systemd[1]: Starting systemd-udevd.service...
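The journald status lines above report journal usage in a fixed "is X, max Y, Z free" format. A small sketch of pulling those figures out of such a line for monitoring purposes (the regex is an assumption matched against the message format seen in this log, not a journald API):

```python
import re

# Matches journald usage messages like:
#   "... is 8.0M, max 195.6M, 187.6M free."
USAGE_RE = re.compile(
    r"is (?P<used>[\d.]+)(?P<u1>[KMGT]), max (?P<max>[\d.]+)(?P<u2>[KMGT]), "
    r"(?P<free>[\d.]+)(?P<u3>[KMGT]) free"
)

def parse_journal_usage(line: str) -> dict:
    m = USAGE_RE.search(line)
    if m is None:
        raise ValueError("not a journald usage line")
    # Reattach each number to its unit suffix.
    return {k: m.group(k) + m.group(u)
            for k, u in (("used", "u1"), ("max", "u2"), ("free", "u3"))}

line = ("systemd-journald[982]: System Journal "
        "(/var/log/journal/e3ab0d9bb217489f94f95a14567b9b76) "
        "is 8.0M, max 195.6M, 187.6M free.")
usage = parse_journal_usage(line)
```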
Aug 13 00:54:27.747740 systemd-udevd[1011]: Using default interface naming scheme 'v252'.
Aug 13 00:54:27.760399 systemd[1]: Started systemd-udevd.service.
Aug 13 00:54:27.770518 kernel: kauditd_printk_skb: 104 callbacks suppressed
Aug 13 00:54:27.770701 kernel: audit: type=1130 audit(1755046467.761:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.770742 kernel: audit: type=1334 audit(1755046467.763:141): prog-id=23 op=LOAD
Aug 13 00:54:27.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.763000 audit: BPF prog-id=23 op=LOAD
Aug 13 00:54:27.764535 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:54:27.773000 audit: BPF prog-id=24 op=LOAD
Aug 13 00:54:27.778078 kernel: audit: type=1334 audit(1755046467.773:142): prog-id=24 op=LOAD
Aug 13 00:54:27.778123 kernel: audit: type=1334 audit(1755046467.774:143): prog-id=25 op=LOAD
Aug 13 00:54:27.778154 kernel: audit: type=1334 audit(1755046467.775:144): prog-id=26 op=LOAD
Aug 13 00:54:27.774000 audit: BPF prog-id=25 op=LOAD
Aug 13 00:54:27.775000 audit: BPF prog-id=26 op=LOAD
Aug 13 00:54:27.777870 systemd[1]: Starting systemd-userdbd.service...
Aug 13 00:54:27.803586 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Aug 13 00:54:27.811036 systemd[1]: Started systemd-userdbd.service.
Aug 13 00:54:27.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.816559 kernel: audit: type=1130 audit(1755046467.811:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.831264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:54:27.856531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 00:54:27.861519 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:54:27.902954 systemd-networkd[1021]: lo: Link UP
Aug 13 00:54:27.903288 systemd-networkd[1021]: lo: Gained carrier
Aug 13 00:54:27.899000 audit[1013]: AVC avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:54:27.903826 systemd-networkd[1021]: Enumeration completed
Aug 13 00:54:27.904101 systemd[1]: Started systemd-networkd.service.
Aug 13 00:54:27.904296 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:54:27.905504 systemd-networkd[1021]: eth0: Link UP
Aug 13 00:54:27.905590 systemd-networkd[1021]: eth0: Gained carrier
Aug 13 00:54:27.908510 kernel: audit: type=1400 audit(1755046467.899:146): avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:54:27.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.913487 kernel: audit: type=1130 audit(1755046467.908:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:27.899000 audit[1013]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b5b3da94a0 a1=338ac a2=7fbb651c8bc5 a3=5 items=110 ppid=1011 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:27.925158 kernel: audit: type=1300 audit(1755046467.899:146): arch=c000003e syscall=175 success=yes exit=0 a0=55b5b3da94a0 a1=338ac a2=7fbb651c8bc5 a3=5 items=110 ppid=1011 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:27.925300 kernel: audit: type=1307 audit(1755046467.899:146): cwd="/"
Aug 13 00:54:27.899000 audit: CWD cwd="/"
Aug 13 00:54:27.899000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=1 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=2 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=3 name=(null) inode=15376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=4 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=5 name=(null) inode=15377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=6 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=7 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=8 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=9 name=(null) inode=15379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=10 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=11 name=(null) inode=15380 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=12 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=13 name=(null) inode=15381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=14 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=15 name=(null) inode=15382 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=16 name=(null) inode=15378 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=17 name=(null) inode=15383 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=18 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=19 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=20 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=21 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:54:27.899000 audit: PATH item=22 name=(null) inode=15384 dev=00:0b
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=23 name=(null) inode=15386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=24 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=25 name=(null) inode=15387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=26 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=27 name=(null) inode=15388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=28 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=29 name=(null) inode=15389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=30 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=31 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=32 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=33 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=34 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=35 name=(null) inode=15392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=36 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=37 name=(null) inode=15393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=38 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=39 name=(null) inode=15394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=40 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=41 name=(null) inode=15395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=42 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=43 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=44 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=45 name=(null) inode=15397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=46 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=47 name=(null) inode=15398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=48 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=49 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=50 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=51 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=52 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=53 name=(null) inode=15401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=55 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=56 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=57 name=(null) inode=15403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=58 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:54:27.899000 audit: PATH item=59 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=60 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=61 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=62 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=63 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=64 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=65 name=(null) inode=15407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=66 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=67 name=(null) inode=15408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=68 
name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=69 name=(null) inode=15409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=70 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=71 name=(null) inode=15410 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=72 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=73 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=74 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=75 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=76 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=77 name=(null) inode=15413 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=78 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=79 name=(null) inode=15414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=80 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=81 name=(null) inode=15415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=82 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=83 name=(null) inode=15416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=84 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=85 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=86 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=87 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=88 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=89 name=(null) inode=15419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=90 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=91 name=(null) inode=15420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=92 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=93 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=94 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=95 name=(null) inode=15422 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=96 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=97 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=98 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=99 name=(null) inode=15424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=100 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=101 name=(null) inode=15425 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=102 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=103 name=(null) inode=15426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=104 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=105 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=106 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=107 name=(null) inode=15428 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PATH item=109 name=(null) inode=15429 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:27.899000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:54:27.928628 systemd-networkd[1021]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:54:27.991293 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 00:54:27.997876 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:54:27.998028 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:54:27.998168 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:54:28.048208 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:54:28.066484 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:54:28.077272 kernel: kvm: Nested Virtualization enabled Aug 13 00:54:28.077359 kernel: SVM: kvm: Nested Paging enabled Aug 13 00:54:28.077375 
kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 00:54:28.077388 kernel: SVM: Virtual GIF supported Aug 13 00:54:28.095483 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:54:28.123885 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:54:28.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.126164 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:54:28.137243 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:54:28.164623 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:54:28.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.165612 systemd[1]: Reached target cryptsetup.target. Aug 13 00:54:28.167526 systemd[1]: Starting lvm2-activation.service... Aug 13 00:54:28.171647 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:54:28.198713 systemd[1]: Finished lvm2-activation.service. Aug 13 00:54:28.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.199633 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:54:28.200519 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:54:28.200541 systemd[1]: Reached target local-fs.target. Aug 13 00:54:28.201392 systemd[1]: Reached target machines.target. Aug 13 00:54:28.203267 systemd[1]: Starting ldconfig.service... 
Aug 13 00:54:28.204316 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:28.204389 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:28.205498 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:54:28.207700 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:54:28.210154 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:54:28.212253 systemd[1]: Starting systemd-sysext.service... Aug 13 00:54:28.213456 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1050 (bootctl) Aug 13 00:54:28.214495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:54:28.223364 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:54:28.225735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:54:28.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.231236 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:54:28.231408 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:54:28.242505 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:54:28.261063 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) Aug 13 00:54:28.261063 systemd-fsck[1058]: /dev/vda1: 790 files, 119344/258078 clusters Aug 13 00:54:28.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:28.263598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:54:28.267092 systemd[1]: Mounting boot.mount... Aug 13 00:54:28.466613 systemd[1]: Mounted boot.mount. Aug 13 00:54:28.481429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:54:28.482525 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:54:28.482727 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:54:28.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.484083 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:54:28.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.499513 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:54:28.505138 (sd-sysext)[1064]: Using extensions 'kubernetes'. Aug 13 00:54:28.505537 (sd-sysext)[1064]: Merged extensions into '/usr'. Aug 13 00:54:28.523204 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:28.524872 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:54:28.525960 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:28.529724 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:28.532718 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:28.535292 systemd[1]: Starting modprobe@loop.service... Aug 13 00:54:28.536491 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 00:54:28.536643 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:28.536791 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:28.539798 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:54:28.541253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:28.541422 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:28.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.543030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:28.543150 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:28.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.544924 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:54:28.545078 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:54:28.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.548076 systemd[1]: Finished systemd-sysext.service. Aug 13 00:54:28.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.551694 systemd[1]: Starting ensure-sysext.service... Aug 13 00:54:28.552834 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:28.552889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:54:28.553937 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:54:28.559366 systemd[1]: Reloading. Aug 13 00:54:28.573626 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:54:28.577342 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:54:28.577829 ldconfig[1049]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:54:28.584920 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 00:54:28.625572 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-08-13T00:54:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:28.626219 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-08-13T00:54:28Z" level=info msg="torcx already run" Aug 13 00:54:28.694626 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:28.694646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:28.712769 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:54:28.769000 audit: BPF prog-id=27 op=LOAD
Aug 13 00:54:28.769000 audit: BPF prog-id=24 op=UNLOAD
Aug 13 00:54:28.769000 audit: BPF prog-id=28 op=LOAD
Aug 13 00:54:28.769000 audit: BPF prog-id=29 op=LOAD
Aug 13 00:54:28.769000 audit: BPF prog-id=25 op=UNLOAD
Aug 13 00:54:28.769000 audit: BPF prog-id=26 op=UNLOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=30 op=LOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=18 op=UNLOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=31 op=LOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=32 op=LOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=19 op=UNLOAD
Aug 13 00:54:28.770000 audit: BPF prog-id=20 op=UNLOAD
Aug 13 00:54:28.773000 audit: BPF prog-id=33 op=LOAD
Aug 13 00:54:28.773000 audit: BPF prog-id=34 op=LOAD
Aug 13 00:54:28.773000 audit: BPF prog-id=21 op=UNLOAD
Aug 13 00:54:28.773000 audit: BPF prog-id=22 op=UNLOAD
Aug 13 00:54:28.774000 audit: BPF prog-id=35 op=LOAD
Aug 13 00:54:28.774000 audit: BPF prog-id=23 op=UNLOAD
Aug 13 00:54:28.777568 systemd[1]: Finished ldconfig.service.
Aug 13 00:54:28.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:28.778702 systemd[1]: Finished systemd-tmpfiles-setup.service.
Aug 13 00:54:28.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:28.783996 systemd[1]: Starting audit-rules.service...
Aug 13 00:54:28.786479 systemd[1]: Starting clean-ca-certificates.service...
Aug 13 00:54:28.788837 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 13 00:54:28.793000 audit: BPF prog-id=36 op=LOAD
Aug 13 00:54:28.795759 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:54:28.797000 audit: BPF prog-id=37 op=LOAD
Aug 13 00:54:28.799192 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 00:54:28.801509 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 00:54:28.803486 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 00:54:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:28.806978 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:54:28.810000 audit[1147]: SYSTEM_BOOT pid=1147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:28.812992 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 00:54:28.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:28.814624 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.816122 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:54:28.818487 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:54:28.821215 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:54:28.822435 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.822849 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:54:28.822000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 00:54:28.822000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc89032180 a2=420 a3=0 items=0 ppid=1134 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:28.822000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 00:54:28.823672 augenrules[1157]: No rules
Aug 13 00:54:28.825537 systemd[1]: Starting systemd-update-done.service...
Aug 13 00:54:28.826323 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:54:28.828047 systemd[1]: Finished audit-rules.service.
Aug 13 00:54:28.831151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:54:28.831293 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:54:28.832623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:54:28.832743 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:54:28.834037 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:54:28.834156 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:54:28.835435 systemd[1]: Finished systemd-update-done.service.
Aug 13 00:54:28.838068 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 00:54:28.840080 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.841371 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:54:28.843762 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:54:28.845703 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:54:28.846497 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.846621 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:54:28.846728 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:54:28.847497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:54:28.848084 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:54:28.849426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:54:28.849623 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:54:28.850957 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:54:28.851115 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:54:28.852312 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:54:28.852496 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.855685 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.857289 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:54:28.859949 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:54:28.864757 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:54:28.867052 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:54:28.867927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.868064 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:54:28.869344 systemd-resolved[1143]: Positive Trust Anchors:
Aug 13 00:54:28.869369 systemd-resolved[1143]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:54:28.869398 systemd-resolved[1143]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:54:28.869421 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 13 00:54:28.870652 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:54:28.872435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:54:28.872605 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:54:28.874044 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:54:28.874226 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:54:28.875770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:54:28.875911 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:54:28.877306 systemd[1]: Started systemd-timesyncd.service.
Aug 13 00:54:28.878783 systemd-timesyncd[1145]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 00:54:28.878840 systemd-timesyncd[1145]: Initial clock synchronization to Wed 2025-08-13 00:54:29.095131 UTC.
Aug 13 00:54:28.879152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:54:28.879269 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:54:28.880693 systemd-resolved[1143]: Defaulting to hostname 'linux'.
Aug 13 00:54:28.881128 systemd[1]: Reached target time-set.target.
Aug 13 00:54:28.882218 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:54:28.882257 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.882579 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:54:28.883636 systemd[1]: Started systemd-resolved.service.
Aug 13 00:54:28.885987 systemd[1]: Reached target network.target.
Aug 13 00:54:28.886995 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:54:28.888037 systemd[1]: Reached target sysinit.target.
Aug 13 00:54:28.889162 systemd[1]: Started motdgen.path.
Aug 13 00:54:28.889977 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 00:54:28.891367 systemd[1]: Started logrotate.timer.
Aug 13 00:54:28.892286 systemd[1]: Started mdadm.timer.
Aug 13 00:54:28.893169 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 00:54:28.894107 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:54:28.894134 systemd[1]: Reached target paths.target.
Aug 13 00:54:28.894947 systemd[1]: Reached target timers.target.
Aug 13 00:54:28.896485 systemd[1]: Listening on dbus.socket.
Aug 13 00:54:28.898536 systemd[1]: Starting docker.socket...
Aug 13 00:54:28.902552 systemd[1]: Listening on sshd.socket.
Aug 13 00:54:28.903539 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:54:28.903952 systemd[1]: Listening on docker.socket.
Aug 13 00:54:28.904833 systemd[1]: Reached target sockets.target.
Aug 13 00:54:28.905734 systemd[1]: Reached target basic.target.
Aug 13 00:54:28.906609 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.906638 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:54:28.907967 systemd[1]: Starting containerd.service...
Aug 13 00:54:28.909913 systemd[1]: Starting dbus.service...
Aug 13 00:54:28.911666 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 00:54:28.913722 systemd[1]: Starting extend-filesystems.service...
Aug 13 00:54:28.914648 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 00:54:28.916055 systemd[1]: Starting motdgen.service...
Aug 13 00:54:28.924891 jq[1176]: false
Aug 13 00:54:28.948187 systemd[1]: Starting prepare-helm.service...
Aug 13 00:54:28.951394 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 00:54:28.953702 systemd[1]: Starting sshd-keygen.service...
Aug 13 00:54:28.958800 systemd[1]: Starting systemd-logind.service...
Aug 13 00:54:28.959773 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:54:28.959913 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:54:28.962394 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:54:28.963272 systemd[1]: Starting update-engine.service...
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found loop1
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found sr0
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda1
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda2
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda3
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found usr
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda4
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda6
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda7
Aug 13 00:54:28.966777 extend-filesystems[1177]: Found vda9
Aug 13 00:54:28.966777 extend-filesystems[1177]: Checking size of /dev/vda9
Aug 13 00:54:28.968095 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 00:54:28.992835 jq[1195]: true
Aug 13 00:54:28.971136 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:54:28.971366 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 00:54:28.993322 tar[1197]: linux-amd64/helm
Aug 13 00:54:28.971824 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:54:28.993690 jq[1198]: true
Aug 13 00:54:28.971988 systemd[1]: Finished motdgen.service.
Aug 13 00:54:28.974868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:54:28.975092 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 00:54:28.997228 dbus-daemon[1175]: [system] SELinux support is enabled
Aug 13 00:54:28.997376 systemd[1]: Started dbus.service.
Aug 13 00:54:29.001021 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:54:29.001071 systemd[1]: Reached target system-config.target.
Aug 13 00:54:29.002266 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:54:29.002300 systemd[1]: Reached target user-config.target.
Aug 13 00:54:29.026693 extend-filesystems[1177]: Resized partition /dev/vda9
Aug 13 00:54:29.028076 update_engine[1192]: I0813 00:54:29.027458  1192 main.cc:92] Flatcar Update Engine starting
Aug 13 00:54:29.030535 update_engine[1192]: I0813 00:54:29.030236  1192 update_check_scheduler.cc:74] Next update check in 10m1s
Aug 13 00:54:29.031149 systemd[1]: Started update-engine.service.
Aug 13 00:54:29.034511 systemd[1]: Started locksmithd.service.
Aug 13 00:54:29.038826 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 00:54:29.121515 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:54:29.126896 env[1199]: time="2025-08-13T00:54:29.126833253Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 00:54:29.145665 env[1199]: time="2025-08-13T00:54:29.145608240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:54:29.146006 env[1199]: time="2025-08-13T00:54:29.145986515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.150997 systemd-logind[1190]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 00:54:29.151026 systemd-logind[1190]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:54:29.152526 systemd-logind[1190]: New seat seat0.
Aug 13 00:54:29.168249 systemd[1]: Started systemd-logind.service.
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182451922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182526194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182846002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182862002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182879845Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182891308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.182965477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.183242921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.183361975Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:54:29.184026 env[1199]: time="2025-08-13T00:54:29.183375671Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:54:29.184277 env[1199]: time="2025-08-13T00:54:29.183418815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 00:54:29.184277 env[1199]: time="2025-08-13T00:54:29.183437194Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:54:29.275180 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:54:29.275249 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:54:29.288713 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:54:29.319067 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:54:29.319067 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:54:29.319067 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:54:29.324662 extend-filesystems[1177]: Resized filesystem in /dev/vda9
Aug 13 00:54:29.324316 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.325812359Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.325896200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.325923530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326036708Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326095308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326138886Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326157521Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326175260Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326241280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326291618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326325132Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326364872Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326574979Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:54:29.328262 env[1199]: time="2025-08-13T00:54:29.326763232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:54:29.328716 bash[1226]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:54:29.324558 systemd[1]: Finished extend-filesystems.service.
Aug 13 00:54:29.327960 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 00:54:29.330312 env[1199]: time="2025-08-13T00:54:29.330183447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:54:29.330312 env[1199]: time="2025-08-13T00:54:29.330271837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.330312 env[1199]: time="2025-08-13T00:54:29.330288805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372488302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372554064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372578245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372590706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372602797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372622440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372637391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372652249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.372739 env[1199]: time="2025-08-13T00:54:29.372682821Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.372973981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.372992657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373011107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373023352Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373046185Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373060457Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373089681Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 00:54:29.373279 env[1199]: time="2025-08-13T00:54:29.373147406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:54:29.373529 env[1199]: time="2025-08-13T00:54:29.373446850Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.373550356Z" level=info msg="Connect containerd service"
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.373609934Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.374408425Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.374733532Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.374776749Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:54:29.376174 env[1199]: time="2025-08-13T00:54:29.375057662Z" level=info msg="containerd successfully booted in 0.249055s"
Aug 13 00:54:29.374947 systemd[1]: Started containerd.service.
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.374638845Z" level=info msg="Start subscribing containerd event"
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.377824041Z" level=info msg="Start recovering state"
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.377926302Z" level=info msg="Start event monitor"
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.377949196Z" level=info msg="Start snapshots syncer"
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.377978718Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:54:29.379122 env[1199]: time="2025-08-13T00:54:29.378009639Z" level=info msg="Start streaming server"
Aug 13 00:54:29.393607 locksmithd[1227]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:54:29.464715 systemd-networkd[1021]: eth0: Gained IPv6LL
Aug 13 00:54:29.467420 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:54:29.468916 systemd[1]: Reached target network-online.target.
Aug 13 00:54:29.471633 systemd[1]: Starting kubelet.service...
Aug 13 00:54:29.567985 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:54:29.600330 systemd[1]: Finished sshd-keygen.service.
Aug 13 00:54:29.604092 systemd[1]: Starting issuegen.service...
Aug 13 00:54:29.611873 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:54:29.612019 systemd[1]: Finished issuegen.service.
Aug 13 00:54:29.614141 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 00:54:29.624048 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 00:54:29.626416 systemd[1]: Started getty@tty1.service.
Aug 13 00:54:29.628409 systemd[1]: Started serial-getty@ttyS0.service.
Aug 13 00:54:29.629545 systemd[1]: Reached target getty.target.
Aug 13 00:54:29.701395 tar[1197]: linux-amd64/LICENSE
Aug 13 00:54:29.701642 tar[1197]: linux-amd64/README.md
Aug 13 00:54:29.707958 systemd[1]: Finished prepare-helm.service.
Aug 13 00:54:30.545262 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:54:30.557059 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:55804.service.
Aug 13 00:54:30.684020 sshd[1255]: Accepted publickey for core from 10.0.0.1 port 55804 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:54:30.686433 sshd[1255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:30.694855 systemd[1]: Created slice user-500.slice.
Aug 13 00:54:30.696827 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 00:54:30.700255 systemd-logind[1190]: New session 1 of user core.
Aug 13 00:54:30.706573 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 00:54:30.709025 systemd[1]: Starting user@500.service...
Aug 13 00:54:30.711918 (systemd)[1258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:30.783642 systemd[1258]: Queued start job for default target default.target.
Aug 13 00:54:30.784083 systemd[1258]: Reached target paths.target.
Aug 13 00:54:30.784103 systemd[1258]: Reached target sockets.target.
Aug 13 00:54:30.784115 systemd[1258]: Reached target timers.target.
Aug 13 00:54:30.784127 systemd[1258]: Reached target basic.target.
Aug 13 00:54:30.784230 systemd[1]: Started user@500.service.
Aug 13 00:54:30.784331 systemd[1258]: Reached target default.target.
Aug 13 00:54:30.784361 systemd[1258]: Startup finished in 66ms.
Aug 13 00:54:30.786331 systemd[1]: Started session-1.scope.
Aug 13 00:54:30.904606 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:55814.service.
Aug 13 00:54:30.949017 systemd[1]: Started kubelet.service.
Aug 13 00:54:30.950640 systemd[1]: Reached target multi-user.target.
Aug 13 00:54:30.953062 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 00:54:30.962467 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 00:54:30.962663 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 00:54:30.963981 systemd[1]: Startup finished in 866ms (kernel) + 5.296s (initrd) + 7.873s (userspace) = 14.037s. Aug 13 00:54:31.077297 sshd[1267]: Accepted publickey for core from 10.0.0.1 port 55814 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:31.078868 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:31.083631 systemd[1]: Started session-2.scope. Aug 13 00:54:31.084557 systemd-logind[1190]: New session 2 of user core. Aug 13 00:54:31.148622 sshd[1267]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:31.151678 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:55814.service: Deactivated successfully. Aug 13 00:54:31.152337 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:54:31.152914 systemd-logind[1190]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:54:31.154084 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:55822.service. Aug 13 00:54:31.154987 systemd-logind[1190]: Removed session 2. Aug 13 00:54:31.189689 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 55822 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:31.190971 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:31.194641 systemd-logind[1190]: New session 3 of user core. Aug 13 00:54:31.195370 systemd[1]: Started session-3.scope. Aug 13 00:54:31.248736 sshd[1282]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:31.251588 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:55822.service: Deactivated successfully. Aug 13 00:54:31.252092 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:54:31.252647 systemd-logind[1190]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:54:31.253789 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:55830.service. Aug 13 00:54:31.254852 systemd-logind[1190]: Removed session 3. 
Aug 13 00:54:31.292769 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 55830 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:31.294504 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:31.297813 systemd-logind[1190]: New session 4 of user core. Aug 13 00:54:31.298553 systemd[1]: Started session-4.scope. Aug 13 00:54:31.432562 sshd[1289]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:31.436339 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:55830.service: Deactivated successfully. Aug 13 00:54:31.437106 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:54:31.437809 systemd-logind[1190]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:54:31.439406 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:55846.service. Aug 13 00:54:31.441002 systemd-logind[1190]: Removed session 4. Aug 13 00:54:31.475619 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 55846 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:54:31.476924 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:31.481140 systemd-logind[1190]: New session 5 of user core. Aug 13 00:54:31.481877 systemd[1]: Started session-5.scope. Aug 13 00:54:31.542973 sudo[1299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:54:31.543188 sudo[1299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:54:31.678659 systemd[1]: Starting docker.service... 
Aug 13 00:54:31.691364 kubelet[1271]: E0813 00:54:31.691247 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:31.693155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:31.693282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:31.693562 systemd[1]: kubelet.service: Consumed 2.028s CPU time. Aug 13 00:54:31.782538 env[1311]: time="2025-08-13T00:54:31.782443748Z" level=info msg="Starting up" Aug 13 00:54:31.783917 env[1311]: time="2025-08-13T00:54:31.783868548Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:54:31.783917 env[1311]: time="2025-08-13T00:54:31.783897099Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:54:31.784039 env[1311]: time="2025-08-13T00:54:31.783954387Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:54:31.784039 env[1311]: time="2025-08-13T00:54:31.783977089Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:54:31.786091 env[1311]: time="2025-08-13T00:54:31.786064483Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:54:31.786091 env[1311]: time="2025-08-13T00:54:31.786082808Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:54:31.786193 env[1311]: time="2025-08-13T00:54:31.786101481Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:54:31.786193 env[1311]: time="2025-08-13T00:54:31.786114653Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Aug 13 00:54:33.165886 env[1311]: time="2025-08-13T00:54:33.165837097Z" level=info msg="Loading containers: start." Aug 13 00:54:33.445552 kernel: Initializing XFRM netlink socket Aug 13 00:54:33.482371 env[1311]: time="2025-08-13T00:54:33.482309390Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:54:33.540331 systemd-networkd[1021]: docker0: Link UP Aug 13 00:54:33.558232 env[1311]: time="2025-08-13T00:54:33.558164595Z" level=info msg="Loading containers: done." Aug 13 00:54:33.570929 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1924300981-merged.mount: Deactivated successfully. Aug 13 00:54:33.574064 env[1311]: time="2025-08-13T00:54:33.574014629Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:54:33.574251 env[1311]: time="2025-08-13T00:54:33.574209277Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:54:33.574390 env[1311]: time="2025-08-13T00:54:33.574305212Z" level=info msg="Daemon has completed initialization" Aug 13 00:54:33.593768 systemd[1]: Started docker.service. Aug 13 00:54:33.605365 env[1311]: time="2025-08-13T00:54:33.605291551Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:54:34.503977 env[1199]: time="2025-08-13T00:54:34.503903084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:54:35.137126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428361959.mount: Deactivated successfully. 
Aug 13 00:54:37.050574 env[1199]: time="2025-08-13T00:54:37.050494385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.052621 env[1199]: time="2025-08-13T00:54:37.052588017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.054513 env[1199]: time="2025-08-13T00:54:37.054418445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.056090 env[1199]: time="2025-08-13T00:54:37.056066769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.057016 env[1199]: time="2025-08-13T00:54:37.056980592Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:54:37.057935 env[1199]: time="2025-08-13T00:54:37.057912638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:54:40.568330 env[1199]: time="2025-08-13T00:54:40.568231123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.578335 env[1199]: time="2025-08-13T00:54:40.578182855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.580658 env[1199]: time="2025-08-13T00:54:40.580613517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.582781 env[1199]: time="2025-08-13T00:54:40.582710089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.583898 env[1199]: time="2025-08-13T00:54:40.583840823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:54:40.584568 env[1199]: time="2025-08-13T00:54:40.584531081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:54:41.944356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:54:41.944643 systemd[1]: Stopped kubelet.service. Aug 13 00:54:41.944693 systemd[1]: kubelet.service: Consumed 2.028s CPU time. Aug 13 00:54:41.946425 systemd[1]: Starting kubelet.service... Aug 13 00:54:42.125102 systemd[1]: Started kubelet.service. Aug 13 00:54:42.193200 kubelet[1447]: E0813 00:54:42.193140 1447 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:42.196029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:42.196153 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:54:42.804927 env[1199]: time="2025-08-13T00:54:42.804842416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:42.807052 env[1199]: time="2025-08-13T00:54:42.807018512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:42.810995 env[1199]: time="2025-08-13T00:54:42.810931733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:42.814156 env[1199]: time="2025-08-13T00:54:42.814109018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:42.815034 env[1199]: time="2025-08-13T00:54:42.814989328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:54:42.815911 env[1199]: time="2025-08-13T00:54:42.815705010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:54:44.150378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573301207.mount: Deactivated successfully. 
Aug 13 00:54:46.241812 env[1199]: time="2025-08-13T00:54:46.241631751Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:46.246428 env[1199]: time="2025-08-13T00:54:46.246342306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:46.248263 env[1199]: time="2025-08-13T00:54:46.248226822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:46.250766 env[1199]: time="2025-08-13T00:54:46.250673043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:46.251305 env[1199]: time="2025-08-13T00:54:46.251260448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:54:46.251907 env[1199]: time="2025-08-13T00:54:46.251871242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:54:47.263865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994795821.mount: Deactivated successfully. 
Aug 13 00:54:50.028691 env[1199]: time="2025-08-13T00:54:50.028593759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.032862 env[1199]: time="2025-08-13T00:54:50.032803554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.035067 env[1199]: time="2025-08-13T00:54:50.035018521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.037179 env[1199]: time="2025-08-13T00:54:50.037146294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.038232 env[1199]: time="2025-08-13T00:54:50.038188781Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:54:50.038718 env[1199]: time="2025-08-13T00:54:50.038652908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:54:50.642160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766006810.mount: Deactivated successfully. 
Aug 13 00:54:50.648590 env[1199]: time="2025-08-13T00:54:50.648522316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.651070 env[1199]: time="2025-08-13T00:54:50.651041693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.652889 env[1199]: time="2025-08-13T00:54:50.652842298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.654204 env[1199]: time="2025-08-13T00:54:50.654165318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.654723 env[1199]: time="2025-08-13T00:54:50.654674905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:54:50.655332 env[1199]: time="2025-08-13T00:54:50.655303510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:54:51.289657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336982029.mount: Deactivated successfully. Aug 13 00:54:52.417786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:54:52.418024 systemd[1]: Stopped kubelet.service. Aug 13 00:54:52.419850 systemd[1]: Starting kubelet.service... Aug 13 00:54:52.521652 systemd[1]: Started kubelet.service. 
Aug 13 00:54:52.557328 kubelet[1458]: E0813 00:54:52.557254 1458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:52.559565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:52.559701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:55.564449 env[1199]: time="2025-08-13T00:54:55.564371718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.577677 env[1199]: time="2025-08-13T00:54:55.577643910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.589376 env[1199]: time="2025-08-13T00:54:55.589334524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.596630 env[1199]: time="2025-08-13T00:54:55.596579497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.597340 env[1199]: time="2025-08-13T00:54:55.597307432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:54:58.059480 systemd[1]: Stopped kubelet.service. Aug 13 00:54:58.061684 systemd[1]: Starting kubelet.service... 
Aug 13 00:54:58.085953 systemd[1]: Reloading. Aug 13 00:54:58.176110 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2025-08-13T00:54:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:58.176142 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2025-08-13T00:54:58Z" level=info msg="torcx already run" Aug 13 00:54:58.752383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:58.752401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:58.770264 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:58.852780 systemd[1]: Started kubelet.service. Aug 13 00:54:58.854166 systemd[1]: Stopping kubelet.service... Aug 13 00:54:58.854399 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:58.854557 systemd[1]: Stopped kubelet.service. Aug 13 00:54:58.856042 systemd[1]: Starting kubelet.service... Aug 13 00:54:58.946568 systemd[1]: Started kubelet.service. Aug 13 00:54:58.994999 kubelet[1560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:58.994999 kubelet[1560]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:58.994999 kubelet[1560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:58.995505 kubelet[1560]: I0813 00:54:58.995038 1560 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:59.280062 kubelet[1560]: I0813 00:54:59.280001 1560 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:54:59.280062 kubelet[1560]: I0813 00:54:59.280038 1560 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:59.280350 kubelet[1560]: I0813 00:54:59.280324 1560 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:54:59.361055 kubelet[1560]: E0813 00:54:59.361002 1560 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:59.363145 kubelet[1560]: I0813 00:54:59.363119 1560 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:59.376323 kubelet[1560]: E0813 00:54:59.376271 1560 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:59.376323 kubelet[1560]: I0813 00:54:59.376317 1560 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:59.383176 kubelet[1560]: I0813 00:54:59.383122 1560 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:54:59.384321 kubelet[1560]: I0813 00:54:59.384298 1560 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:54:59.384532 kubelet[1560]: I0813 00:54:59.384489 1560 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:59.385038 kubelet[1560]: I0813 00:54:59.384530 1560 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:54:59.385235 kubelet[1560]: I0813 00:54:59.385052 1560 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:59.385235 kubelet[1560]: I0813 00:54:59.385065 1560 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:54:59.385235 kubelet[1560]: I0813 00:54:59.385216 1560 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:59.395913 kubelet[1560]: I0813 00:54:59.395870 1560 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:54:59.396035 kubelet[1560]: I0813 00:54:59.395924 1560 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:59.396035 kubelet[1560]: I0813 00:54:59.395979 1560 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:54:59.396269 kubelet[1560]: I0813 00:54:59.396247 1560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:59.406004 kubelet[1560]: W0813 00:54:59.405889 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:54:59.406004 kubelet[1560]: E0813 00:54:59.405967 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:59.406138 kubelet[1560]: I0813 00:54:59.406036 1560 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="1.6.16" apiVersion="v1" Aug 13 00:54:59.406287 kubelet[1560]: W0813 00:54:59.406198 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:54:59.406287 kubelet[1560]: E0813 00:54:59.406279 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:59.406430 kubelet[1560]: I0813 00:54:59.406413 1560 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:59.407824 kubelet[1560]: W0813 00:54:59.407795 1560 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:54:59.415578 kubelet[1560]: I0813 00:54:59.415543 1560 server.go:1274] "Started kubelet" Aug 13 00:54:59.418413 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 00:54:59.418579 kubelet[1560]: I0813 00:54:59.418555 1560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:59.424960 kubelet[1560]: I0813 00:54:59.424884 1560 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:59.426176 kubelet[1560]: I0813 00:54:59.426148 1560 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:54:59.426769 kubelet[1560]: I0813 00:54:59.426601 1560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:59.427273 kubelet[1560]: I0813 00:54:59.427028 1560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:59.438298 kubelet[1560]: E0813 00:54:59.438264 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:59.438476 kubelet[1560]: I0813 00:54:59.438347 1560 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:54:59.438602 kubelet[1560]: I0813 00:54:59.438584 1560 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:54:59.438774 kubelet[1560]: I0813 00:54:59.438741 1560 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:59.439230 kubelet[1560]: W0813 00:54:59.439188 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:54:59.439313 kubelet[1560]: E0813 00:54:59.439243 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" 
logger="UnhandledError" Aug 13 00:54:59.439421 kubelet[1560]: I0813 00:54:59.439404 1560 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:59.439481 kubelet[1560]: I0813 00:54:59.439427 1560 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:59.439550 kubelet[1560]: I0813 00:54:59.439533 1560 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:59.441106 kubelet[1560]: E0813 00:54:59.441077 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Aug 13 00:54:59.441846 kubelet[1560]: E0813 00:54:59.441724 1560 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:59.441927 kubelet[1560]: I0813 00:54:59.441901 1560 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:59.455143 kubelet[1560]: E0813 00:54:59.445205 1560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d7e19f44975 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:59.415509365 +0000 UTC m=+0.465093850,LastTimestamp:2025-08-13 00:54:59.415509365 +0000 UTC m=+0.465093850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:59.463939 kubelet[1560]: I0813 00:54:59.463884 1560 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:54:59.463939 kubelet[1560]: I0813 00:54:59.463910 1560 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:59.463939 kubelet[1560]: I0813 00:54:59.463945 1560 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:59.469069 kubelet[1560]: I0813 00:54:59.469006 1560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:59.470190 kubelet[1560]: I0813 00:54:59.470166 1560 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:54:59.470257 kubelet[1560]: I0813 00:54:59.470200 1560 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:54:59.470257 kubelet[1560]: I0813 00:54:59.470229 1560 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:54:59.470322 kubelet[1560]: E0813 00:54:59.470268 1560 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:59.470858 kubelet[1560]: W0813 00:54:59.470828 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:54:59.470962 kubelet[1560]: E0813 00:54:59.470878 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:59.538706 kubelet[1560]: E0813 00:54:59.538555 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:59.571164 kubelet[1560]: E0813 00:54:59.571052 1560 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:54:59.639585 kubelet[1560]: E0813 00:54:59.639508 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:59.642122 kubelet[1560]: E0813 00:54:59.642094 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: 
connection refused" interval="400ms" Aug 13 00:54:59.740508 kubelet[1560]: E0813 00:54:59.740403 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:59.771826 kubelet[1560]: E0813 00:54:59.771731 1560 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:54:59.841507 kubelet[1560]: E0813 00:54:59.841282 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:59.941796 kubelet[1560]: E0813 00:54:59.941717 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:55:00.029127 kubelet[1560]: I0813 00:55:00.029043 1560 policy_none.go:49] "None policy: Start" Aug 13 00:55:00.030229 kubelet[1560]: I0813 00:55:00.030206 1560 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:55:00.030281 kubelet[1560]: I0813 00:55:00.030241 1560 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:55:00.037775 systemd[1]: Created slice kubepods.slice. Aug 13 00:55:00.041862 kubelet[1560]: E0813 00:55:00.041808 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:55:00.042250 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:55:00.042641 kubelet[1560]: E0813 00:55:00.042597 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Aug 13 00:55:00.045056 systemd[1]: Created slice kubepods-besteffort.slice. 
Aug 13 00:55:00.051387 kubelet[1560]: I0813 00:55:00.051336 1560 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:55:00.051624 kubelet[1560]: I0813 00:55:00.051608 1560 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:55:00.051683 kubelet[1560]: I0813 00:55:00.051630 1560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:55:00.052066 kubelet[1560]: I0813 00:55:00.051972 1560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:55:00.054480 kubelet[1560]: E0813 00:55:00.053969 1560 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:55:00.153566 kubelet[1560]: I0813 00:55:00.153439 1560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:55:00.153881 kubelet[1560]: E0813 00:55:00.153855 1560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Aug 13 00:55:00.179021 systemd[1]: Created slice kubepods-burstable-pod235bc0bb5767f4f1d9dc29aaf3c38c97.slice. Aug 13 00:55:00.189557 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 13 00:55:00.193655 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
Aug 13 00:55:00.243409 kubelet[1560]: I0813 00:55:00.243360 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:55:00.243409 kubelet[1560]: I0813 00:55:00.243410 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:55:00.243647 kubelet[1560]: I0813 00:55:00.243443 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:55:00.243647 kubelet[1560]: I0813 00:55:00.243458 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:55:00.243647 kubelet[1560]: I0813 00:55:00.243492 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 00:55:00.243647 kubelet[1560]: I0813 00:55:00.243537 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:55:00.243647 kubelet[1560]: I0813 00:55:00.243585 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:55:00.243936 kubelet[1560]: I0813 00:55:00.243610 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:55:00.243936 kubelet[1560]: I0813 00:55:00.243632 1560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:55:00.355479 kubelet[1560]: I0813 00:55:00.355410 1560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:55:00.355897 kubelet[1560]: E0813 00:55:00.355867 1560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" 
node="localhost" Aug 13 00:55:00.488717 kubelet[1560]: E0813 00:55:00.488669 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:00.489515 env[1199]: time="2025-08-13T00:55:00.489459516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:235bc0bb5767f4f1d9dc29aaf3c38c97,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:00.492079 kubelet[1560]: E0813 00:55:00.492043 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:00.492487 env[1199]: time="2025-08-13T00:55:00.492443454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:00.495759 kubelet[1560]: E0813 00:55:00.495713 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:00.496183 env[1199]: time="2025-08-13T00:55:00.496112108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:00.678737 kubelet[1560]: W0813 00:55:00.678686 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:55:00.678892 kubelet[1560]: E0813 00:55:00.678793 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:55:00.757906 kubelet[1560]: I0813 00:55:00.757777 1560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:55:00.758143 kubelet[1560]: E0813 00:55:00.758115 1560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Aug 13 00:55:00.804054 kubelet[1560]: E0813 00:55:00.803876 1560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d7e19f44975 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:59.415509365 +0000 UTC m=+0.465093850,LastTimestamp:2025-08-13 00:54:59.415509365 +0000 UTC m=+0.465093850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:55:00.835590 kubelet[1560]: W0813 00:55:00.835485 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:55:00.835775 kubelet[1560]: E0813 00:55:00.835621 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:55:00.843497 kubelet[1560]: E0813 00:55:00.843445 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Aug 13 00:55:00.891684 kubelet[1560]: W0813 00:55:00.891565 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:55:00.891684 kubelet[1560]: E0813 00:55:00.891678 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:55:00.987302 kubelet[1560]: W0813 00:55:00.987212 1560 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 13 00:55:00.987302 kubelet[1560]: E0813 00:55:00.987296 1560 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:55:01.147406 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount227035600.mount: Deactivated successfully. Aug 13 00:55:01.177690 env[1199]: time="2025-08-13T00:55:01.177622076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.182638 env[1199]: time="2025-08-13T00:55:01.182569017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.183891 env[1199]: time="2025-08-13T00:55:01.183845469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.185926 env[1199]: time="2025-08-13T00:55:01.185877741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.187601 env[1199]: time="2025-08-13T00:55:01.187575436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.189163 env[1199]: time="2025-08-13T00:55:01.189111661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.190569 env[1199]: time="2025-08-13T00:55:01.190536103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.191989 env[1199]: time="2025-08-13T00:55:01.191962679Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.193930 env[1199]: time="2025-08-13T00:55:01.193900910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.196692 env[1199]: time="2025-08-13T00:55:01.196644145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.198133 env[1199]: time="2025-08-13T00:55:01.198077086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.201330 env[1199]: time="2025-08-13T00:55:01.201286928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:01.229531 env[1199]: time="2025-08-13T00:55:01.229087161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:01.229531 env[1199]: time="2025-08-13T00:55:01.229138611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:01.229531 env[1199]: time="2025-08-13T00:55:01.229152344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:01.229531 env[1199]: time="2025-08-13T00:55:01.229321893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38679146be0fb56dce471fe6dfe57ba771df685e327f40172f10bbe8e6bb44be pid=1604 runtime=io.containerd.runc.v2 Aug 13 00:55:01.239098 env[1199]: time="2025-08-13T00:55:01.238892983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:01.239098 env[1199]: time="2025-08-13T00:55:01.238938770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:01.239098 env[1199]: time="2025-08-13T00:55:01.238950668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:01.239402 env[1199]: time="2025-08-13T00:55:01.239123305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3e40e585efb9c610878e4379c5c43b73bdcb6c65475a879e9a6584863e79e74 pid=1635 runtime=io.containerd.runc.v2 Aug 13 00:55:01.242268 env[1199]: time="2025-08-13T00:55:01.242057639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:01.242268 env[1199]: time="2025-08-13T00:55:01.242103036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:01.242268 env[1199]: time="2025-08-13T00:55:01.242118011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:01.242500 env[1199]: time="2025-08-13T00:55:01.242287390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e6155b09e4fe4bbc1384051921a0286fb02f51917de5317cbe48ea907577296 pid=1620 runtime=io.containerd.runc.v2 Aug 13 00:55:01.276998 systemd[1]: Started cri-containerd-38679146be0fb56dce471fe6dfe57ba771df685e327f40172f10bbe8e6bb44be.scope. Aug 13 00:55:01.285788 systemd[1]: Started cri-containerd-1e6155b09e4fe4bbc1384051921a0286fb02f51917de5317cbe48ea907577296.scope. Aug 13 00:55:01.289796 systemd[1]: Started cri-containerd-b3e40e585efb9c610878e4379c5c43b73bdcb6c65475a879e9a6584863e79e74.scope. Aug 13 00:55:01.459510 env[1199]: time="2025-08-13T00:55:01.446910954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:235bc0bb5767f4f1d9dc29aaf3c38c97,Namespace:kube-system,Attempt:0,} returns sandbox id \"38679146be0fb56dce471fe6dfe57ba771df685e327f40172f10bbe8e6bb44be\"" Aug 13 00:55:01.459510 env[1199]: time="2025-08-13T00:55:01.452279475Z" level=info msg="CreateContainer within sandbox \"38679146be0fb56dce471fe6dfe57ba771df685e327f40172f10bbe8e6bb44be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:55:01.459900 kubelet[1560]: E0813 00:55:01.448687 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:01.475312 kubelet[1560]: E0813 00:55:01.475230 1560 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 13 
00:55:01.521309 env[1199]: time="2025-08-13T00:55:01.521263359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e40e585efb9c610878e4379c5c43b73bdcb6c65475a879e9a6584863e79e74\"" Aug 13 00:55:01.523192 kubelet[1560]: E0813 00:55:01.522970 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:01.525204 env[1199]: time="2025-08-13T00:55:01.525165441Z" level=info msg="CreateContainer within sandbox \"b3e40e585efb9c610878e4379c5c43b73bdcb6c65475a879e9a6584863e79e74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:55:01.527513 env[1199]: time="2025-08-13T00:55:01.527484507Z" level=info msg="CreateContainer within sandbox \"38679146be0fb56dce471fe6dfe57ba771df685e327f40172f10bbe8e6bb44be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a40514ef40109238b996f35d99c0d1ff8c1445f771f87670bb3e244d83f0186\"" Aug 13 00:55:01.528140 env[1199]: time="2025-08-13T00:55:01.528097470Z" level=info msg="StartContainer for \"1a40514ef40109238b996f35d99c0d1ff8c1445f771f87670bb3e244d83f0186\"" Aug 13 00:55:01.541382 env[1199]: time="2025-08-13T00:55:01.541327990Z" level=info msg="CreateContainer within sandbox \"b3e40e585efb9c610878e4379c5c43b73bdcb6c65475a879e9a6584863e79e74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6014326da122a8ba0651cd73d4269ef54a3111fe79bc4458b894af0bfc537231\"" Aug 13 00:55:01.542099 env[1199]: time="2025-08-13T00:55:01.542080320Z" level=info msg="StartContainer for \"6014326da122a8ba0651cd73d4269ef54a3111fe79bc4458b894af0bfc537231\"" Aug 13 00:55:01.544508 env[1199]: time="2025-08-13T00:55:01.544479976Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e6155b09e4fe4bbc1384051921a0286fb02f51917de5317cbe48ea907577296\"" Aug 13 00:55:01.545375 kubelet[1560]: E0813 00:55:01.545347 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:01.547071 env[1199]: time="2025-08-13T00:55:01.547047307Z" level=info msg="CreateContainer within sandbox \"1e6155b09e4fe4bbc1384051921a0286fb02f51917de5317cbe48ea907577296\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:55:01.554677 systemd[1]: Started cri-containerd-1a40514ef40109238b996f35d99c0d1ff8c1445f771f87670bb3e244d83f0186.scope. Aug 13 00:55:01.560194 env[1199]: time="2025-08-13T00:55:01.560154848Z" level=info msg="CreateContainer within sandbox \"1e6155b09e4fe4bbc1384051921a0286fb02f51917de5317cbe48ea907577296\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"53a64ec7e59a11950287c20ba0a0631126bd2b01ecad9c5b1a47cce097f8b47e\"" Aug 13 00:55:01.560348 kubelet[1560]: I0813 00:55:01.560254 1560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:55:01.560738 kubelet[1560]: E0813 00:55:01.560709 1560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Aug 13 00:55:01.562567 env[1199]: time="2025-08-13T00:55:01.560940707Z" level=info msg="StartContainer for \"53a64ec7e59a11950287c20ba0a0631126bd2b01ecad9c5b1a47cce097f8b47e\"" Aug 13 00:55:01.564662 systemd[1]: Started cri-containerd-6014326da122a8ba0651cd73d4269ef54a3111fe79bc4458b894af0bfc537231.scope. 
Aug 13 00:55:01.588310 systemd[1]: Started cri-containerd-53a64ec7e59a11950287c20ba0a0631126bd2b01ecad9c5b1a47cce097f8b47e.scope. Aug 13 00:55:01.618361 env[1199]: time="2025-08-13T00:55:01.618305723Z" level=info msg="StartContainer for \"1a40514ef40109238b996f35d99c0d1ff8c1445f771f87670bb3e244d83f0186\" returns successfully" Aug 13 00:55:01.632534 env[1199]: time="2025-08-13T00:55:01.632411883Z" level=info msg="StartContainer for \"6014326da122a8ba0651cd73d4269ef54a3111fe79bc4458b894af0bfc537231\" returns successfully" Aug 13 00:55:01.640096 env[1199]: time="2025-08-13T00:55:01.640032324Z" level=info msg="StartContainer for \"53a64ec7e59a11950287c20ba0a0631126bd2b01ecad9c5b1a47cce097f8b47e\" returns successfully" Aug 13 00:55:02.483522 kubelet[1560]: E0813 00:55:02.483487 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:02.485403 kubelet[1560]: E0813 00:55:02.485382 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:02.489872 kubelet[1560]: E0813 00:55:02.489851 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:03.162649 kubelet[1560]: I0813 00:55:03.162600 1560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:55:03.486094 kubelet[1560]: E0813 00:55:03.486031 1560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:55:03.491237 kubelet[1560]: E0813 00:55:03.491200 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Aug 13 00:55:03.491524 kubelet[1560]: E0813 00:55:03.491494 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:03.603533 kubelet[1560]: I0813 00:55:03.603475 1560 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:55:03.603533 kubelet[1560]: E0813 00:55:03.603524 1560 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:55:03.646310 kubelet[1560]: E0813 00:55:03.646247 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:55:03.747237 kubelet[1560]: E0813 00:55:03.746986 1560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:55:04.399387 kubelet[1560]: I0813 00:55:04.399308 1560 apiserver.go:52] "Watching apiserver" Aug 13 00:55:04.439391 kubelet[1560]: I0813 00:55:04.439326 1560 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:55:04.507440 kubelet[1560]: E0813 00:55:04.507382 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:05.411233 systemd[1]: Reloading. 
Aug 13 00:55:05.482446 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2025-08-13T00:55:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:55:05.482500 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2025-08-13T00:55:05Z" level=info msg="torcx already run" Aug 13 00:55:05.493192 kubelet[1560]: E0813 00:55:05.493133 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:05.555076 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:55:05.555096 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:55:05.577796 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:55:05.691292 systemd[1]: Stopping kubelet.service... Aug 13 00:55:05.717011 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:55:05.717242 systemd[1]: Stopped kubelet.service. Aug 13 00:55:05.719228 systemd[1]: Starting kubelet.service... Aug 13 00:55:05.818558 systemd[1]: Started kubelet.service. Aug 13 00:55:05.862423 kubelet[1902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:55:05.862423 kubelet[1902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:55:05.862423 kubelet[1902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:55:05.862980 kubelet[1902]: I0813 00:55:05.862484 1902 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:55:05.869426 kubelet[1902]: I0813 00:55:05.869387 1902 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:55:05.869426 kubelet[1902]: I0813 00:55:05.869413 1902 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:55:05.869694 kubelet[1902]: I0813 00:55:05.869672 1902 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:55:05.870854 kubelet[1902]: I0813 00:55:05.870832 1902 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 00:55:05.872597 kubelet[1902]: I0813 00:55:05.872573 1902 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:55:05.875727 kubelet[1902]: E0813 00:55:05.875443 1902 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:55:05.875834 kubelet[1902]: I0813 00:55:05.875817 1902 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:55:05.880163 kubelet[1902]: I0813 00:55:05.880121 1902 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:55:05.880287 kubelet[1902]: I0813 00:55:05.880229 1902 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:55:05.880379 kubelet[1902]: I0813 00:55:05.880338 1902 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:55:05.880575 kubelet[1902]: I0813 00:55:05.880371 1902 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:55:05.880693 kubelet[1902]: I0813 00:55:05.880579 1902 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:55:05.880693 kubelet[1902]: I0813 00:55:05.880589 1902 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:55:05.880693 kubelet[1902]: I0813 00:55:05.880625 1902 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:55:05.880800 kubelet[1902]: I0813 00:55:05.880715 1902 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:55:05.880800 kubelet[1902]: I0813 00:55:05.880738 1902 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:55:05.880800 kubelet[1902]: I0813 00:55:05.880769 1902 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:55:05.880800 kubelet[1902]: I0813 00:55:05.880793 1902 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:55:05.881750 kubelet[1902]: I0813 00:55:05.881730 1902 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:55:05.882168 kubelet[1902]: I0813 00:55:05.882150 1902 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:55:05.882606 kubelet[1902]: I0813 00:55:05.882590 1902 server.go:1274] "Started kubelet"
Aug 13 00:55:05.887371 kubelet[1902]: I0813 00:55:05.886759 1902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:55:05.887371 kubelet[1902]: I0813 00:55:05.887155 1902 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:55:05.887371 kubelet[1902]: I0813 00:55:05.887274 1902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:55:05.888892 kubelet[1902]: I0813 00:55:05.888867 1902 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:55:05.889722 kubelet[1902]: I0813 00:55:05.889706 1902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:55:05.891192 kubelet[1902]: E0813 00:55:05.891168 1902 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:55:05.891442 kubelet[1902]: I0813 00:55:05.891411 1902 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:55:05.891591 kubelet[1902]: I0813 00:55:05.891537 1902 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:55:05.891683 kubelet[1902]: I0813 00:55:05.891662 1902 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:55:05.892258 kubelet[1902]: I0813 00:55:05.892241 1902 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:55:05.892908 kubelet[1902]: I0813 00:55:05.892889 1902 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:55:05.893017 kubelet[1902]: I0813 00:55:05.892995 1902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:55:05.894753 kubelet[1902]: I0813 00:55:05.894706 1902 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:55:05.909856 kubelet[1902]: I0813 00:55:05.909784 1902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:55:05.910967 kubelet[1902]: I0813 00:55:05.910939 1902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:55:05.910967 kubelet[1902]: I0813 00:55:05.910969 1902 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 00:55:05.911089 kubelet[1902]: I0813 00:55:05.910988 1902 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 00:55:05.911089 kubelet[1902]: E0813 00:55:05.911029 1902 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:55:05.927235 kubelet[1902]: I0813 00:55:05.927195 1902 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:55:05.927235 kubelet[1902]: I0813 00:55:05.927214 1902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:55:05.927235 kubelet[1902]: I0813 00:55:05.927234 1902 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:55:05.927484 kubelet[1902]: I0813 00:55:05.927380 1902 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:55:05.927484 kubelet[1902]: I0813 00:55:05.927389 1902 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:55:05.927484 kubelet[1902]: I0813 00:55:05.927406 1902 policy_none.go:49] "None policy: Start"
Aug 13 00:55:05.928043 kubelet[1902]: I0813 00:55:05.928023 1902 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:55:05.928101 kubelet[1902]: I0813 00:55:05.928072 1902 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:55:05.928229 kubelet[1902]: I0813 00:55:05.928216 1902 state_mem.go:75] "Updated machine memory state"
Aug 13 00:55:05.931833 kubelet[1902]: I0813 00:55:05.931811 1902 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:55:05.931973 kubelet[1902]: I0813 00:55:05.931951 1902 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:55:05.932032 kubelet[1902]: I0813 00:55:05.931970 1902 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:55:05.932438 kubelet[1902]: I0813 00:55:05.932300 1902 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:55:06.038154 kubelet[1902]: I0813 00:55:06.038040 1902 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:55:06.092885 kubelet[1902]: I0813 00:55:06.092840 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:55:06.092885 kubelet[1902]: I0813 00:55:06.092885 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:55:06.093068 kubelet[1902]: I0813 00:55:06.092909 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:55:06.093068 kubelet[1902]: I0813 00:55:06.092930 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:55:06.093068 kubelet[1902]: I0813 00:55:06.092949 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:55:06.093068 kubelet[1902]: I0813 00:55:06.092973 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:55:06.093068 kubelet[1902]: I0813 00:55:06.092994 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:55:06.093212 kubelet[1902]: I0813 00:55:06.093026 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:55:06.093212 kubelet[1902]: I0813 00:55:06.093053 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/235bc0bb5767f4f1d9dc29aaf3c38c97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"235bc0bb5767f4f1d9dc29aaf3c38c97\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:55:06.253181 kubelet[1902]: E0813 00:55:06.253110 1902 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:55:06.253441 kubelet[1902]: E0813 00:55:06.253322 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:06.266857 kubelet[1902]: I0813 00:55:06.266825 1902 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Aug 13 00:55:06.267100 kubelet[1902]: I0813 00:55:06.267079 1902 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Aug 13 00:55:06.371723 kubelet[1902]: E0813 00:55:06.371594 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:06.371723 kubelet[1902]: E0813 00:55:06.371594 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:06.572505 sudo[1938]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 00:55:06.572712 sudo[1938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Aug 13 00:55:06.882008 kubelet[1902]: I0813 00:55:06.881961 1902 apiserver.go:52] "Watching apiserver"
Aug 13 00:55:06.891830 kubelet[1902]: I0813 00:55:06.891709 1902 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 00:55:06.923158 kubelet[1902]: E0813 00:55:06.923122 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:06.923399 kubelet[1902]: E0813 00:55:06.923279 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:07.174130 sudo[1938]: pam_unix(sudo:session): session closed for user root
Aug 13 00:55:07.179745 kubelet[1902]: I0813 00:55:07.179690 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.179670773 podStartE2EDuration="1.179670773s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:07.179407219 +0000 UTC m=+1.357359491" watchObservedRunningTime="2025-08-13 00:55:07.179670773 +0000 UTC m=+1.357623045"
Aug 13 00:55:07.187738 kubelet[1902]: E0813 00:55:07.187711 1902 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:55:07.187905 kubelet[1902]: E0813 00:55:07.187884 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:07.394478 kubelet[1902]: I0813 00:55:07.394386 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.39436199 podStartE2EDuration="3.39436199s" podCreationTimestamp="2025-08-13 00:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:07.385695121 +0000 UTC m=+1.563647403" watchObservedRunningTime="2025-08-13 00:55:07.39436199 +0000 UTC m=+1.572314262"
Aug 13 00:55:07.394690 kubelet[1902]: I0813 00:55:07.394540 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.394531055 podStartE2EDuration="1.394531055s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:07.393528904 +0000 UTC m=+1.571481176" watchObservedRunningTime="2025-08-13 00:55:07.394531055 +0000 UTC m=+1.572483347"
Aug 13 00:55:07.924058 kubelet[1902]: E0813 00:55:07.924010 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:07.924632 kubelet[1902]: E0813 00:55:07.924582 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:09.139098 sudo[1299]: pam_unix(sudo:session): session closed for user root
Aug 13 00:55:09.140868 sshd[1296]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:09.143916 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:55846.service: Deactivated successfully.
Aug 13 00:55:09.144652 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:55:09.144788 systemd[1]: session-5.scope: Consumed 4.865s CPU time.
Aug 13 00:55:09.145255 systemd-logind[1190]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:55:09.146187 systemd-logind[1190]: Removed session 5.
Aug 13 00:55:10.848182 kubelet[1902]: I0813 00:55:10.848131 1902 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:55:10.848632 env[1199]: time="2025-08-13T00:55:10.848571864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:55:10.848952 kubelet[1902]: I0813 00:55:10.848922 1902 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:55:11.490239 kubelet[1902]: E0813 00:55:11.490152 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:11.930635 kubelet[1902]: E0813 00:55:11.930588 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:12.113653 systemd[1]: Created slice kubepods-besteffort-podc4d88630_f8bc_44cb_8f17_b7bc35945356.slice.
Aug 13 00:55:12.128398 kubelet[1902]: I0813 00:55:12.128347 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4d88630-f8bc-44cb-8f17-b7bc35945356-lib-modules\") pod \"kube-proxy-zfj6j\" (UID: \"c4d88630-f8bc-44cb-8f17-b7bc35945356\") " pod="kube-system/kube-proxy-zfj6j"
Aug 13 00:55:12.128398 kubelet[1902]: I0813 00:55:12.128387 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggl7t\" (UniqueName: \"kubernetes.io/projected/c4d88630-f8bc-44cb-8f17-b7bc35945356-kube-api-access-ggl7t\") pod \"kube-proxy-zfj6j\" (UID: \"c4d88630-f8bc-44cb-8f17-b7bc35945356\") " pod="kube-system/kube-proxy-zfj6j"
Aug 13 00:55:12.128604 kubelet[1902]: I0813 00:55:12.128413 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4d88630-f8bc-44cb-8f17-b7bc35945356-kube-proxy\") pod \"kube-proxy-zfj6j\" (UID: \"c4d88630-f8bc-44cb-8f17-b7bc35945356\") " pod="kube-system/kube-proxy-zfj6j"
Aug 13 00:55:12.128604 kubelet[1902]: I0813 00:55:12.128433 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4d88630-f8bc-44cb-8f17-b7bc35945356-xtables-lock\") pod \"kube-proxy-zfj6j\" (UID: \"c4d88630-f8bc-44cb-8f17-b7bc35945356\") " pod="kube-system/kube-proxy-zfj6j"
Aug 13 00:55:12.355279 systemd[1]: Created slice kubepods-burstable-pod80870bc1_a32f_4ee1_99e0_1caea40cf072.slice.
Aug 13 00:55:12.358938 kubelet[1902]: I0813 00:55:12.358887 1902 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 00:55:12.396831 systemd[1]: Created slice kubepods-besteffort-pod2223223b_e61d_4d69_855d_6c0d95269d1b.slice.
Aug 13 00:55:12.420692 kubelet[1902]: E0813 00:55:12.420658 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:12.421288 env[1199]: time="2025-08-13T00:55:12.421250765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfj6j,Uid:c4d88630-f8bc-44cb-8f17-b7bc35945356,Namespace:kube-system,Attempt:0,}"
Aug 13 00:55:12.431122 kubelet[1902]: I0813 00:55:12.431096 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-bpf-maps\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431248 kubelet[1902]: I0813 00:55:12.431126 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-hostproc\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431248 kubelet[1902]: I0813 00:55:12.431143 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-run\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431248 kubelet[1902]: I0813 00:55:12.431158 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-kernel\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431248 kubelet[1902]: I0813 00:55:12.431174 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2223223b-e61d-4d69-855d-6c0d95269d1b-cilium-config-path\") pod \"cilium-operator-5d85765b45-bzqg7\" (UID: \"2223223b-e61d-4d69-855d-6c0d95269d1b\") " pod="kube-system/cilium-operator-5d85765b45-bzqg7"
Aug 13 00:55:12.431248 kubelet[1902]: I0813 00:55:12.431191 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cni-path\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431378 kubelet[1902]: I0813 00:55:12.431205 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-hubble-tls\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431378 kubelet[1902]: I0813 00:55:12.431217 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-cgroup\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431378 kubelet[1902]: I0813 00:55:12.431231 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-etc-cni-netd\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431378 kubelet[1902]: I0813 00:55:12.431246 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-lib-modules\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431378 kubelet[1902]: I0813 00:55:12.431320 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-net\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431526 kubelet[1902]: I0813 00:55:12.431386 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80870bc1-a32f-4ee1-99e0-1caea40cf072-clustermesh-secrets\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431526 kubelet[1902]: I0813 00:55:12.431401 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-config-path\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431526 kubelet[1902]: I0813 00:55:12.431427 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdh5\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-kube-api-access-vxdh5\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.431526 kubelet[1902]: I0813 00:55:12.431443 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2k7k\" (UniqueName: \"kubernetes.io/projected/2223223b-e61d-4d69-855d-6c0d95269d1b-kube-api-access-x2k7k\") pod \"cilium-operator-5d85765b45-bzqg7\" (UID: \"2223223b-e61d-4d69-855d-6c0d95269d1b\") " pod="kube-system/cilium-operator-5d85765b45-bzqg7"
Aug 13 00:55:12.431526 kubelet[1902]: I0813 00:55:12.431459 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-xtables-lock\") pod \"cilium-wb6xz\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " pod="kube-system/cilium-wb6xz"
Aug 13 00:55:12.441266 env[1199]: time="2025-08-13T00:55:12.441185042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:55:12.441266 env[1199]: time="2025-08-13T00:55:12.441223414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:55:12.441266 env[1199]: time="2025-08-13T00:55:12.441248558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:55:12.441517 env[1199]: time="2025-08-13T00:55:12.441459078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5 pid=1992 runtime=io.containerd.runc.v2
Aug 13 00:55:12.453723 systemd[1]: run-containerd-runc-k8s.io-0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5-runc.Fg41qc.mount: Deactivated successfully.
Aug 13 00:55:12.456681 systemd[1]: Started cri-containerd-0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5.scope.
Aug 13 00:55:12.491415 env[1199]: time="2025-08-13T00:55:12.491355660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfj6j,Uid:c4d88630-f8bc-44cb-8f17-b7bc35945356,Namespace:kube-system,Attempt:0,} returns sandbox id \"0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5\""
Aug 13 00:55:12.492110 kubelet[1902]: E0813 00:55:12.492074 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:12.495435 env[1199]: time="2025-08-13T00:55:12.495224298Z" level=info msg="CreateContainer within sandbox \"0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:55:12.855584 env[1199]: time="2025-08-13T00:55:12.855502988Z" level=info msg="CreateContainer within sandbox \"0619d3fd6457386ec2c62c58f82ebdbc77d724763f0e7c55af31d5b96b0b32e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82309fac2116bd73f7f727248a26e83c0b37c633992113703dc47d0c6d774743\""
Aug 13 00:55:12.856218 env[1199]: time="2025-08-13T00:55:12.856192732Z" level=info msg="StartContainer for \"82309fac2116bd73f7f727248a26e83c0b37c633992113703dc47d0c6d774743\""
Aug 13 00:55:12.872876 systemd[1]: Started cri-containerd-82309fac2116bd73f7f727248a26e83c0b37c633992113703dc47d0c6d774743.scope.
Aug 13 00:55:12.904497 env[1199]: time="2025-08-13T00:55:12.901736944Z" level=info msg="StartContainer for \"82309fac2116bd73f7f727248a26e83c0b37c633992113703dc47d0c6d774743\" returns successfully"
Aug 13 00:55:12.934853 kubelet[1902]: E0813 00:55:12.934580 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:12.960013 kubelet[1902]: E0813 00:55:12.959975 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:12.960646 env[1199]: time="2025-08-13T00:55:12.960585306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wb6xz,Uid:80870bc1-a32f-4ee1-99e0-1caea40cf072,Namespace:kube-system,Attempt:0,}"
Aug 13 00:55:12.975766 env[1199]: time="2025-08-13T00:55:12.975702420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:55:12.975766 env[1199]: time="2025-08-13T00:55:12.975767850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:55:12.975960 env[1199]: time="2025-08-13T00:55:12.975791680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:55:12.976139 env[1199]: time="2025-08-13T00:55:12.976092373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874 pid=2084 runtime=io.containerd.runc.v2
Aug 13 00:55:12.988084 systemd[1]: Started cri-containerd-1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874.scope.
Aug 13 00:55:12.999614 kubelet[1902]: E0813 00:55:12.999584 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:13.002896 env[1199]: time="2025-08-13T00:55:13.002850203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bzqg7,Uid:2223223b-e61d-4d69-855d-6c0d95269d1b,Namespace:kube-system,Attempt:0,}"
Aug 13 00:55:13.018844 env[1199]: time="2025-08-13T00:55:13.018786769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wb6xz,Uid:80870bc1-a32f-4ee1-99e0-1caea40cf072,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\""
Aug 13 00:55:13.019627 kubelet[1902]: E0813 00:55:13.019596 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:13.026857 env[1199]: time="2025-08-13T00:55:13.026801952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:55:13.032942 env[1199]: time="2025-08-13T00:55:13.032873617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:55:13.033117 env[1199]: time="2025-08-13T00:55:13.032945269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:55:13.033117 env[1199]: time="2025-08-13T00:55:13.032968809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:55:13.033202 env[1199]: time="2025-08-13T00:55:13.033115160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568 pid=2157 runtime=io.containerd.runc.v2
Aug 13 00:55:13.043863 systemd[1]: Started cri-containerd-0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568.scope.
Aug 13 00:55:13.078290 env[1199]: time="2025-08-13T00:55:13.078229223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bzqg7,Uid:2223223b-e61d-4d69-855d-6c0d95269d1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\""
Aug 13 00:55:13.078864 kubelet[1902]: E0813 00:55:13.078832 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:13.823312 kubelet[1902]: E0813 00:55:13.823284 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:13.837243 kubelet[1902]: I0813 00:55:13.837192 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfj6j" podStartSLOduration=2.837171103 podStartE2EDuration="2.837171103s" podCreationTimestamp="2025-08-13 00:55:11 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:12.946657194 +0000 UTC m=+7.124609496" watchObservedRunningTime="2025-08-13 00:55:13.837171103 +0000 UTC m=+8.015123375" Aug 13 00:55:13.928085 update_engine[1192]: I0813 00:55:13.927982 1192 update_attempter.cc:509] Updating boot flags... Aug 13 00:55:13.939830 kubelet[1902]: E0813 00:55:13.939418 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:17.763147 kubelet[1902]: E0813 00:55:17.763057 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:21.771569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625088537.mount: Deactivated successfully. Aug 13 00:55:26.345346 env[1199]: time="2025-08-13T00:55:26.345258055Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:26.348408 env[1199]: time="2025-08-13T00:55:26.348283985Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:26.353114 env[1199]: time="2025-08-13T00:55:26.352871116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:26.354451 env[1199]: time="2025-08-13T00:55:26.354358861Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:55:26.357365 env[1199]: time="2025-08-13T00:55:26.357285410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:55:26.359187 env[1199]: time="2025-08-13T00:55:26.359132217Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:55:26.389085 env[1199]: time="2025-08-13T00:55:26.388990116Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\"" Aug 13 00:55:26.389973 env[1199]: time="2025-08-13T00:55:26.389913144Z" level=info msg="StartContainer for \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\"" Aug 13 00:55:26.413073 systemd[1]: Started cri-containerd-58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728.scope. Aug 13 00:55:26.448823 env[1199]: time="2025-08-13T00:55:26.448761688Z" level=info msg="StartContainer for \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\" returns successfully" Aug 13 00:55:26.462477 systemd[1]: cri-containerd-58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728.scope: Deactivated successfully. 
Aug 13 00:55:26.778255 env[1199]: time="2025-08-13T00:55:26.778171030Z" level=info msg="shim disconnected" id=58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728 Aug 13 00:55:26.778255 env[1199]: time="2025-08-13T00:55:26.778234789Z" level=warning msg="cleaning up after shim disconnected" id=58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728 namespace=k8s.io Aug 13 00:55:26.778255 env[1199]: time="2025-08-13T00:55:26.778244328Z" level=info msg="cleaning up dead shim" Aug 13 00:55:26.790888 env[1199]: time="2025-08-13T00:55:26.790779555Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2346 runtime=io.containerd.runc.v2\n" Aug 13 00:55:26.965843 kubelet[1902]: E0813 00:55:26.965730 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:26.967659 env[1199]: time="2025-08-13T00:55:26.967591571Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:55:26.994376 env[1199]: time="2025-08-13T00:55:26.994142115Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\"" Aug 13 00:55:26.995168 env[1199]: time="2025-08-13T00:55:26.995095984Z" level=info msg="StartContainer for \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\"" Aug 13 00:55:27.013540 systemd[1]: Started cri-containerd-55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a.scope. 
Aug 13 00:55:27.050232 env[1199]: time="2025-08-13T00:55:27.049605217Z" level=info msg="StartContainer for \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\" returns successfully" Aug 13 00:55:27.062797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:55:27.063116 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:55:27.063530 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:55:27.065886 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:55:27.066264 systemd[1]: cri-containerd-55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a.scope: Deactivated successfully. Aug 13 00:55:27.078155 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:55:27.100420 env[1199]: time="2025-08-13T00:55:27.100362548Z" level=info msg="shim disconnected" id=55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a Aug 13 00:55:27.100420 env[1199]: time="2025-08-13T00:55:27.100414933Z" level=warning msg="cleaning up after shim disconnected" id=55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a namespace=k8s.io Aug 13 00:55:27.100420 env[1199]: time="2025-08-13T00:55:27.100425504Z" level=info msg="cleaning up dead shim" Aug 13 00:55:27.113360 env[1199]: time="2025-08-13T00:55:27.113273513Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2409 runtime=io.containerd.runc.v2\n" Aug 13 00:55:27.379746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728-rootfs.mount: Deactivated successfully. Aug 13 00:55:27.837185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884429978.mount: Deactivated successfully. 
Aug 13 00:55:27.969544 kubelet[1902]: E0813 00:55:27.969487 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:27.972890 env[1199]: time="2025-08-13T00:55:27.972852232Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:55:27.988451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020305706.mount: Deactivated successfully. Aug 13 00:55:27.996782 env[1199]: time="2025-08-13T00:55:27.996723968Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\"" Aug 13 00:55:27.999048 env[1199]: time="2025-08-13T00:55:27.997440266Z" level=info msg="StartContainer for \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\"" Aug 13 00:55:28.016224 systemd[1]: Started cri-containerd-29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635.scope. Aug 13 00:55:28.055571 systemd[1]: cri-containerd-29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635.scope: Deactivated successfully. 
Aug 13 00:55:28.056695 env[1199]: time="2025-08-13T00:55:28.056643856Z" level=info msg="StartContainer for \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\" returns successfully" Aug 13 00:55:28.151853 env[1199]: time="2025-08-13T00:55:28.151713932Z" level=info msg="shim disconnected" id=29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635 Aug 13 00:55:28.151853 env[1199]: time="2025-08-13T00:55:28.151765144Z" level=warning msg="cleaning up after shim disconnected" id=29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635 namespace=k8s.io Aug 13 00:55:28.151853 env[1199]: time="2025-08-13T00:55:28.151774041Z" level=info msg="cleaning up dead shim" Aug 13 00:55:28.158996 env[1199]: time="2025-08-13T00:55:28.158905330Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2466 runtime=io.containerd.runc.v2\n" Aug 13 00:55:28.503229 env[1199]: time="2025-08-13T00:55:28.503174367Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:28.507448 env[1199]: time="2025-08-13T00:55:28.507368134Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:28.509128 env[1199]: time="2025-08-13T00:55:28.509098880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:28.509558 env[1199]: time="2025-08-13T00:55:28.509523059Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:55:28.511684 env[1199]: time="2025-08-13T00:55:28.511649998Z" level=info msg="CreateContainer within sandbox \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:55:28.525092 env[1199]: time="2025-08-13T00:55:28.525036571Z" level=info msg="CreateContainer within sandbox \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\"" Aug 13 00:55:28.525583 env[1199]: time="2025-08-13T00:55:28.525552724Z" level=info msg="StartContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\"" Aug 13 00:55:28.543321 systemd[1]: Started cri-containerd-e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea.scope. 
Aug 13 00:55:28.568195 env[1199]: time="2025-08-13T00:55:28.568122590Z" level=info msg="StartContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" returns successfully" Aug 13 00:55:28.974121 kubelet[1902]: E0813 00:55:28.973948 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:28.976287 kubelet[1902]: E0813 00:55:28.976233 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:28.976644 env[1199]: time="2025-08-13T00:55:28.976611072Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:55:29.156393 env[1199]: time="2025-08-13T00:55:29.156306278Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\"" Aug 13 00:55:29.157067 env[1199]: time="2025-08-13T00:55:29.157017809Z" level=info msg="StartContainer for \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\"" Aug 13 00:55:29.189912 systemd[1]: Started cri-containerd-2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b.scope. Aug 13 00:55:29.231651 systemd[1]: cri-containerd-2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b.scope: Deactivated successfully. 
Aug 13 00:55:29.232850 env[1199]: time="2025-08-13T00:55:29.232808278Z" level=info msg="StartContainer for \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\" returns successfully" Aug 13 00:55:29.300401 env[1199]: time="2025-08-13T00:55:29.300339880Z" level=info msg="shim disconnected" id=2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b Aug 13 00:55:29.300401 env[1199]: time="2025-08-13T00:55:29.300391293Z" level=warning msg="cleaning up after shim disconnected" id=2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b namespace=k8s.io Aug 13 00:55:29.300401 env[1199]: time="2025-08-13T00:55:29.300401785Z" level=info msg="cleaning up dead shim" Aug 13 00:55:29.308415 env[1199]: time="2025-08-13T00:55:29.308351454Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2560 runtime=io.containerd.runc.v2\n" Aug 13 00:55:29.379838 systemd[1]: run-containerd-runc-k8s.io-e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea-runc.oCvNR0.mount: Deactivated successfully. 
Aug 13 00:55:29.980731 kubelet[1902]: E0813 00:55:29.980694 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:29.981254 kubelet[1902]: E0813 00:55:29.980750 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:29.983205 env[1199]: time="2025-08-13T00:55:29.983164987Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:55:29.999789 kubelet[1902]: I0813 00:55:29.998799 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bzqg7" podStartSLOduration=2.568436624 podStartE2EDuration="17.998766192s" podCreationTimestamp="2025-08-13 00:55:12 +0000 UTC" firstStartedPulling="2025-08-13 00:55:13.080092982 +0000 UTC m=+7.258045254" lastFinishedPulling="2025-08-13 00:55:28.51042255 +0000 UTC m=+22.688374822" observedRunningTime="2025-08-13 00:55:29.022233072 +0000 UTC m=+23.200185344" watchObservedRunningTime="2025-08-13 00:55:29.998766192 +0000 UTC m=+24.176718464" Aug 13 00:55:30.002905 env[1199]: time="2025-08-13T00:55:30.002850385Z" level=info msg="CreateContainer within sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\"" Aug 13 00:55:30.003536 env[1199]: time="2025-08-13T00:55:30.003498968Z" level=info msg="StartContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\"" Aug 13 00:55:30.021287 systemd[1]: Started cri-containerd-002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9.scope. 
Aug 13 00:55:30.058738 env[1199]: time="2025-08-13T00:55:30.058674386Z" level=info msg="StartContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" returns successfully" Aug 13 00:55:30.140962 kubelet[1902]: I0813 00:55:30.140925 1902 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:55:30.178706 systemd[1]: Created slice kubepods-burstable-podddac4c81_49af_4515_9773_06d643d33e28.slice. Aug 13 00:55:30.187035 systemd[1]: Created slice kubepods-burstable-pod0f989625_f96e_4e5e_9031_f13b1d747330.slice. Aug 13 00:55:30.345893 kubelet[1902]: I0813 00:55:30.345762 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddac4c81-49af-4515-9773-06d643d33e28-config-volume\") pod \"coredns-7c65d6cfc9-nhxmw\" (UID: \"ddac4c81-49af-4515-9773-06d643d33e28\") " pod="kube-system/coredns-7c65d6cfc9-nhxmw" Aug 13 00:55:30.345893 kubelet[1902]: I0813 00:55:30.345806 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f989625-f96e-4e5e-9031-f13b1d747330-config-volume\") pod \"coredns-7c65d6cfc9-xqkbn\" (UID: \"0f989625-f96e-4e5e-9031-f13b1d747330\") " pod="kube-system/coredns-7c65d6cfc9-xqkbn" Aug 13 00:55:30.345893 kubelet[1902]: I0813 00:55:30.345830 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfng\" (UniqueName: \"kubernetes.io/projected/0f989625-f96e-4e5e-9031-f13b1d747330-kube-api-access-lnfng\") pod \"coredns-7c65d6cfc9-xqkbn\" (UID: \"0f989625-f96e-4e5e-9031-f13b1d747330\") " pod="kube-system/coredns-7c65d6cfc9-xqkbn" Aug 13 00:55:30.345893 kubelet[1902]: I0813 00:55:30.345847 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lcnh\" (UniqueName: 
\"kubernetes.io/projected/ddac4c81-49af-4515-9773-06d643d33e28-kube-api-access-5lcnh\") pod \"coredns-7c65d6cfc9-nhxmw\" (UID: \"ddac4c81-49af-4515-9773-06d643d33e28\") " pod="kube-system/coredns-7c65d6cfc9-nhxmw" Aug 13 00:55:30.783553 kubelet[1902]: E0813 00:55:30.783494 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:30.784575 env[1199]: time="2025-08-13T00:55:30.784499735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nhxmw,Uid:ddac4c81-49af-4515-9773-06d643d33e28,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:30.790261 kubelet[1902]: E0813 00:55:30.790196 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:30.790948 env[1199]: time="2025-08-13T00:55:30.790894211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqkbn,Uid:0f989625-f96e-4e5e-9031-f13b1d747330,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:30.985300 kubelet[1902]: E0813 00:55:30.985263 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:31.000084 kubelet[1902]: I0813 00:55:31.000006 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wb6xz" podStartSLOduration=5.664040445 podStartE2EDuration="18.999986433s" podCreationTimestamp="2025-08-13 00:55:12 +0000 UTC" firstStartedPulling="2025-08-13 00:55:13.020987483 +0000 UTC m=+7.198939755" lastFinishedPulling="2025-08-13 00:55:26.356933471 +0000 UTC m=+20.534885743" observedRunningTime="2025-08-13 00:55:30.998768335 +0000 UTC m=+25.176720607" watchObservedRunningTime="2025-08-13 00:55:30.999986433 +0000 UTC m=+25.177938705" Aug 13 
00:55:31.986883 kubelet[1902]: E0813 00:55:31.986849 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:32.360064 systemd-networkd[1021]: cilium_host: Link UP Aug 13 00:55:32.360259 systemd-networkd[1021]: cilium_net: Link UP Aug 13 00:55:32.364652 systemd-networkd[1021]: cilium_net: Gained carrier Aug 13 00:55:32.365915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:55:32.365983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:55:32.366016 systemd-networkd[1021]: cilium_host: Gained carrier Aug 13 00:55:32.403640 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:57488.service. Aug 13 00:55:32.446186 sshd[2779]: Accepted publickey for core from 10.0.0.1 port 57488 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:55:32.448216 sshd[2779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:32.454083 systemd[1]: Started session-6.scope. Aug 13 00:55:32.454785 systemd-logind[1190]: New session 6 of user core. Aug 13 00:55:32.461894 systemd-networkd[1021]: cilium_vxlan: Link UP Aug 13 00:55:32.461900 systemd-networkd[1021]: cilium_vxlan: Gained carrier Aug 13 00:55:32.584317 sshd[2779]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:32.586512 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:57488.service: Deactivated successfully. Aug 13 00:55:32.587161 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:55:32.587621 systemd-logind[1190]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:55:32.588220 systemd-logind[1190]: Removed session 6. 
Aug 13 00:55:32.623680 systemd-networkd[1021]: cilium_net: Gained IPv6LL Aug 13 00:55:32.677503 kernel: NET: Registered PF_ALG protocol family Aug 13 00:55:32.988658 kubelet[1902]: E0813 00:55:32.988346 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:33.330663 systemd-networkd[1021]: lxc_health: Link UP Aug 13 00:55:33.339445 systemd-networkd[1021]: cilium_host: Gained IPv6LL Aug 13 00:55:33.342043 systemd-networkd[1021]: lxc_health: Gained carrier Aug 13 00:55:33.342491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:55:33.831668 systemd-networkd[1021]: lxc005715d7f557: Link UP Aug 13 00:55:33.839499 kernel: eth0: renamed from tmp9cb89 Aug 13 00:55:33.852708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:33.852827 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc005715d7f557: link becomes ready Aug 13 00:55:33.853397 systemd-networkd[1021]: lxc005715d7f557: Gained carrier Aug 13 00:55:33.853531 systemd-networkd[1021]: cilium_vxlan: Gained IPv6LL Aug 13 00:55:33.854995 systemd-networkd[1021]: lxcd9815cbedf28: Link UP Aug 13 00:55:33.869541 kernel: eth0: renamed from tmpa48dd Aug 13 00:55:33.877342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd9815cbedf28: link becomes ready Aug 13 00:55:33.877588 systemd-networkd[1021]: lxcd9815cbedf28: Gained carrier Aug 13 00:55:34.969963 kubelet[1902]: E0813 00:55:34.969924 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:34.992605 kubelet[1902]: E0813 00:55:34.992566 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:35.319622 systemd-networkd[1021]: lxc_health: 
Gained IPv6LL Aug 13 00:55:35.703637 systemd-networkd[1021]: lxcd9815cbedf28: Gained IPv6LL Aug 13 00:55:35.831626 systemd-networkd[1021]: lxc005715d7f557: Gained IPv6LL Aug 13 00:55:37.279213 env[1199]: time="2025-08-13T00:55:37.279083569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:37.279213 env[1199]: time="2025-08-13T00:55:37.279153035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:37.279213 env[1199]: time="2025-08-13T00:55:37.279172574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:37.280265 env[1199]: time="2025-08-13T00:55:37.279519768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78 pid=3144 runtime=io.containerd.runc.v2 Aug 13 00:55:37.280265 env[1199]: time="2025-08-13T00:55:37.279643842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:37.280265 env[1199]: time="2025-08-13T00:55:37.279681517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:37.280265 env[1199]: time="2025-08-13T00:55:37.279704592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:37.280265 env[1199]: time="2025-08-13T00:55:37.279867432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a48dd823f81d93efbf3a815e62e06f8718e884bb5a0ee0bda8620874ae3c6116 pid=3152 runtime=io.containerd.runc.v2 Aug 13 00:55:37.300615 systemd[1]: run-containerd-runc-k8s.io-9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78-runc.8MKlag.mount: Deactivated successfully. Aug 13 00:55:37.304565 systemd[1]: Started cri-containerd-9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78.scope. Aug 13 00:55:37.306231 systemd[1]: Started cri-containerd-a48dd823f81d93efbf3a815e62e06f8718e884bb5a0ee0bda8620874ae3c6116.scope. Aug 13 00:55:37.321878 systemd-resolved[1143]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:37.325484 systemd-resolved[1143]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:37.361242 env[1199]: time="2025-08-13T00:55:37.361162350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqkbn,Uid:0f989625-f96e-4e5e-9031-f13b1d747330,Namespace:kube-system,Attempt:0,} returns sandbox id \"a48dd823f81d93efbf3a815e62e06f8718e884bb5a0ee0bda8620874ae3c6116\"" Aug 13 00:55:37.364062 env[1199]: time="2025-08-13T00:55:37.363032774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nhxmw,Uid:ddac4c81-49af-4515-9773-06d643d33e28,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78\"" Aug 13 00:55:37.364202 kubelet[1902]: E0813 00:55:37.363174 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:37.364898 kubelet[1902]: E0813 
00:55:37.364262 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:37.368281 env[1199]: time="2025-08-13T00:55:37.367768731Z" level=info msg="CreateContainer within sandbox \"9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:37.368534 env[1199]: time="2025-08-13T00:55:37.368457487Z" level=info msg="CreateContainer within sandbox \"a48dd823f81d93efbf3a815e62e06f8718e884bb5a0ee0bda8620874ae3c6116\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:37.396737 env[1199]: time="2025-08-13T00:55:37.396645443Z" level=info msg="CreateContainer within sandbox \"a48dd823f81d93efbf3a815e62e06f8718e884bb5a0ee0bda8620874ae3c6116\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36487b923b60b4c94303071ff950d02b9e859896d1a3fd632d254cb9a436d4c1\"" Aug 13 00:55:37.400555 env[1199]: time="2025-08-13T00:55:37.400489493Z" level=info msg="StartContainer for \"36487b923b60b4c94303071ff950d02b9e859896d1a3fd632d254cb9a436d4c1\"" Aug 13 00:55:37.409590 env[1199]: time="2025-08-13T00:55:37.409458798Z" level=info msg="CreateContainer within sandbox \"9cb89de9f684300726def768a31289a2a48dae3cbe11048548ed30747e381f78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"589dd10d34a085944932ffaae262daad6b637ea014d95fc3ba98f66b399a7cb4\"" Aug 13 00:55:37.411120 env[1199]: time="2025-08-13T00:55:37.411028700Z" level=info msg="StartContainer for \"589dd10d34a085944932ffaae262daad6b637ea014d95fc3ba98f66b399a7cb4\"" Aug 13 00:55:37.433201 systemd[1]: Started cri-containerd-36487b923b60b4c94303071ff950d02b9e859896d1a3fd632d254cb9a436d4c1.scope. Aug 13 00:55:37.450295 systemd[1]: Started cri-containerd-589dd10d34a085944932ffaae262daad6b637ea014d95fc3ba98f66b399a7cb4.scope. 
Aug 13 00:55:37.470835 env[1199]: time="2025-08-13T00:55:37.470769106Z" level=info msg="StartContainer for \"36487b923b60b4c94303071ff950d02b9e859896d1a3fd632d254cb9a436d4c1\" returns successfully"
Aug 13 00:55:37.489083 env[1199]: time="2025-08-13T00:55:37.488967437Z" level=info msg="StartContainer for \"589dd10d34a085944932ffaae262daad6b637ea014d95fc3ba98f66b399a7cb4\" returns successfully"
Aug 13 00:55:37.599661 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:57496.service.
Aug 13 00:55:37.676607 sshd[3282]: Accepted publickey for core from 10.0.0.1 port 57496 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:37.682883 sshd[3282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:37.706304 systemd-logind[1190]: New session 7 of user core.
Aug 13 00:55:37.714375 systemd[1]: Started session-7.scope.
Aug 13 00:55:37.985158 sshd[3282]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:37.992334 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:57496.service: Deactivated successfully.
Aug 13 00:55:37.993765 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:55:37.998636 systemd-logind[1190]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:55:38.000815 systemd-logind[1190]: Removed session 7.
Aug 13 00:55:38.003400 kubelet[1902]: E0813 00:55:38.003354 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:38.007003 kubelet[1902]: E0813 00:55:38.006963 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:38.051714 kubelet[1902]: I0813 00:55:38.051516 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nhxmw" podStartSLOduration=26.051484622 podStartE2EDuration="26.051484622s" podCreationTimestamp="2025-08-13 00:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:38.029677972 +0000 UTC m=+32.207630244" watchObservedRunningTime="2025-08-13 00:55:38.051484622 +0000 UTC m=+32.229436894"
Aug 13 00:55:38.291360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425711588.mount: Deactivated successfully.
Aug 13 00:55:39.014761 kubelet[1902]: E0813 00:55:39.012566 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:39.014761 kubelet[1902]: E0813 00:55:39.013916 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:40.014335 kubelet[1902]: E0813 00:55:40.014180 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:40.014335 kubelet[1902]: E0813 00:55:40.014260 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:42.988191 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:58458.service.
Aug 13 00:55:43.023944 sshd[3313]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:43.025265 sshd[3313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:43.029414 systemd-logind[1190]: New session 8 of user core.
Aug 13 00:55:43.030262 systemd[1]: Started session-8.scope.
Aug 13 00:55:43.151019 sshd[3313]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:43.154081 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:58458.service: Deactivated successfully.
Aug 13 00:55:43.155102 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:55:43.156047 systemd-logind[1190]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:55:43.157018 systemd-logind[1190]: Removed session 8.
Aug 13 00:55:48.156648 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:52986.service.
Aug 13 00:55:48.372963 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 52986 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:48.374573 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:48.378976 systemd-logind[1190]: New session 9 of user core.
Aug 13 00:55:48.379892 systemd[1]: Started session-9.scope.
Aug 13 00:55:48.490772 sshd[3332]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:48.493024 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:52986.service: Deactivated successfully.
Aug 13 00:55:48.493883 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:55:48.494763 systemd-logind[1190]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:55:48.495584 systemd-logind[1190]: Removed session 9.
Aug 13 00:55:53.495875 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:52990.service.
Aug 13 00:55:53.532640 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 52990 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:53.533897 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:53.537053 systemd-logind[1190]: New session 10 of user core.
Aug 13 00:55:53.537839 systemd[1]: Started session-10.scope.
Aug 13 00:55:53.641325 sshd[3346]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:53.643532 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:52990.service: Deactivated successfully.
Aug 13 00:55:53.644207 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:55:53.644913 systemd-logind[1190]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:55:53.645618 systemd-logind[1190]: Removed session 10.
Aug 13 00:55:58.648802 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:45798.service.
Aug 13 00:55:58.685178 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 45798 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:58.686581 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:58.690558 systemd-logind[1190]: New session 11 of user core.
Aug 13 00:55:58.691429 systemd[1]: Started session-11.scope.
Aug 13 00:55:58.801634 sshd[3360]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:58.804622 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:45798.service: Deactivated successfully.
Aug 13 00:55:58.805190 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:55:58.807497 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:45810.service.
Aug 13 00:55:58.808253 systemd-logind[1190]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:55:58.809242 systemd-logind[1190]: Removed session 11.
Aug 13 00:55:58.843169 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 45810 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:58.844360 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:58.847820 systemd-logind[1190]: New session 12 of user core.
Aug 13 00:55:58.848821 systemd[1]: Started session-12.scope.
Aug 13 00:55:58.998850 sshd[3375]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:59.001965 systemd-logind[1190]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:55:59.002112 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:45810.service: Deactivated successfully.
Aug 13 00:55:59.002651 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:55:59.004059 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:45816.service.
Aug 13 00:55:59.005173 systemd-logind[1190]: Removed session 12.
Aug 13 00:55:59.049237 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 45816 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:55:59.050580 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:59.054445 systemd-logind[1190]: New session 13 of user core.
Aug 13 00:55:59.055285 systemd[1]: Started session-13.scope.
Aug 13 00:55:59.162140 sshd[3387]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:59.164275 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:45816.service: Deactivated successfully.
Aug 13 00:55:59.165082 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:55:59.165575 systemd-logind[1190]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:55:59.166286 systemd-logind[1190]: Removed session 13.
Aug 13 00:56:04.169528 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:45820.service.
Aug 13 00:56:04.206568 sshd[3401]: Accepted publickey for core from 10.0.0.1 port 45820 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:04.208338 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:04.212930 systemd-logind[1190]: New session 14 of user core.
Aug 13 00:56:04.213749 systemd[1]: Started session-14.scope.
Aug 13 00:56:04.325245 sshd[3401]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:04.327449 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:45820.service: Deactivated successfully.
Aug 13 00:56:04.328248 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:56:04.328989 systemd-logind[1190]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:56:04.329668 systemd-logind[1190]: Removed session 14.
Aug 13 00:56:09.338160 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:57988.service.
Aug 13 00:56:09.404542 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 57988 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:09.407660 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:09.416125 systemd-logind[1190]: New session 15 of user core.
Aug 13 00:56:09.417569 systemd[1]: Started session-15.scope.
Aug 13 00:56:09.664433 sshd[3417]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:09.672914 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:57988.service: Deactivated successfully.
Aug 13 00:56:09.675779 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:56:09.677553 systemd-logind[1190]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:56:09.683588 systemd-logind[1190]: Removed session 15.
Aug 13 00:56:14.672513 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:57992.service.
Aug 13 00:56:14.723746 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:14.727968 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:14.741978 systemd-logind[1190]: New session 16 of user core.
Aug 13 00:56:14.745734 systemd[1]: Started session-16.scope.
Aug 13 00:56:14.983327 sshd[3433]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:14.984771 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:57996.service.
Aug 13 00:56:14.993582 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:57992.service: Deactivated successfully.
Aug 13 00:56:14.994594 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:56:14.996898 systemd-logind[1190]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:56:14.999802 systemd-logind[1190]: Removed session 16.
Aug 13 00:56:15.061315 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:15.064697 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:15.093809 systemd-logind[1190]: New session 17 of user core.
Aug 13 00:56:15.100147 systemd[1]: Started session-17.scope.
Aug 13 00:56:15.728718 sshd[3445]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:15.740580 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:58002.service.
Aug 13 00:56:15.741483 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:57996.service: Deactivated successfully.
Aug 13 00:56:15.743047 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:56:15.746333 systemd-logind[1190]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:56:15.752412 systemd-logind[1190]: Removed session 17.
Aug 13 00:56:15.822014 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:15.832534 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:15.857454 systemd[1]: Started session-18.scope.
Aug 13 00:56:15.857651 systemd-logind[1190]: New session 18 of user core.
Aug 13 00:56:18.140992 sshd[3457]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:18.154543 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:48276.service.
Aug 13 00:56:18.155897 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:58002.service: Deactivated successfully.
Aug 13 00:56:18.159564 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:56:18.163787 systemd-logind[1190]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:56:18.166826 systemd-logind[1190]: Removed session 18.
Aug 13 00:56:18.255288 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 48276 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:18.261724 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:18.283964 systemd-logind[1190]: New session 19 of user core.
Aug 13 00:56:18.286564 systemd[1]: Started session-19.scope.
Aug 13 00:56:18.697733 sshd[3476]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:18.702267 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:48292.service.
Aug 13 00:56:18.709825 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:48276.service: Deactivated successfully.
Aug 13 00:56:18.710874 systemd-logind[1190]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:56:18.710960 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:56:18.714214 systemd-logind[1190]: Removed session 19.
Aug 13 00:56:18.752496 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:18.754419 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:18.760556 systemd-logind[1190]: New session 20 of user core.
Aug 13 00:56:18.761999 systemd[1]: Started session-20.scope.
Aug 13 00:56:19.001924 sshd[3487]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:19.006386 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:48292.service: Deactivated successfully.
Aug 13 00:56:19.007108 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:56:19.008324 systemd-logind[1190]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:56:19.011169 systemd-logind[1190]: Removed session 20.
Aug 13 00:56:24.022246 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:48294.service.
Aug 13 00:56:24.100454 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 48294 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:24.104772 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:24.120182 systemd-logind[1190]: New session 21 of user core.
Aug 13 00:56:24.126821 systemd[1]: Started session-21.scope.
Aug 13 00:56:24.315725 sshd[3501]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:24.325099 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:48294.service: Deactivated successfully.
Aug 13 00:56:24.326285 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:56:24.330142 systemd-logind[1190]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:56:24.332210 systemd-logind[1190]: Removed session 21.
Aug 13 00:56:25.916406 kubelet[1902]: E0813 00:56:25.916235 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:26.912914 kubelet[1902]: E0813 00:56:26.912107 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:29.329796 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:33002.service.
Aug 13 00:56:29.410868 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:29.417236 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:29.445585 systemd-logind[1190]: New session 22 of user core.
Aug 13 00:56:29.448229 systemd[1]: Started session-22.scope.
Aug 13 00:56:29.664273 sshd[3517]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:29.671827 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:33002.service: Deactivated successfully.
Aug 13 00:56:29.674297 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:56:29.676641 systemd-logind[1190]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:56:29.682094 systemd-logind[1190]: Removed session 22.
Aug 13 00:56:32.912530 kubelet[1902]: E0813 00:56:32.912431 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:33.918362 kubelet[1902]: E0813 00:56:33.918297 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:34.683676 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:33004.service.
Aug 13 00:56:34.736676 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:34.738152 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:34.751519 systemd-logind[1190]: New session 23 of user core.
Aug 13 00:56:34.752114 systemd[1]: Started session-23.scope.
Aug 13 00:56:34.921972 sshd[3531]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:34.925640 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:33004.service: Deactivated successfully.
Aug 13 00:56:34.926730 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:56:34.927710 systemd-logind[1190]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:56:34.928913 systemd-logind[1190]: Removed session 23.
Aug 13 00:56:39.925491 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:58756.service.
Aug 13 00:56:39.962401 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 58756 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:39.963886 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:39.967828 systemd-logind[1190]: New session 24 of user core.
Aug 13 00:56:39.968969 systemd[1]: Started session-24.scope.
Aug 13 00:56:40.075497 sshd[3545]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:40.079244 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:58756.service: Deactivated successfully.
Aug 13 00:56:40.079973 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:56:40.080627 systemd-logind[1190]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:56:40.082080 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:58764.service.
Aug 13 00:56:40.083049 systemd-logind[1190]: Removed session 24.
Aug 13 00:56:40.117429 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 58764 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak
Aug 13 00:56:40.118610 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:40.122143 systemd-logind[1190]: New session 25 of user core.
Aug 13 00:56:40.123069 systemd[1]: Started session-25.scope.
Aug 13 00:56:40.912390 kubelet[1902]: E0813 00:56:40.912330 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:41.463929 kubelet[1902]: I0813 00:56:41.463858 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xqkbn" podStartSLOduration=89.46383224 podStartE2EDuration="1m29.46383224s" podCreationTimestamp="2025-08-13 00:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:38.084542322 +0000 UTC m=+32.262494604" watchObservedRunningTime="2025-08-13 00:56:41.46383224 +0000 UTC m=+95.641784512"
Aug 13 00:56:41.471857 env[1199]: time="2025-08-13T00:56:41.471797624Z" level=info msg="StopContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" with timeout 30 (s)"
Aug 13 00:56:41.472750 env[1199]: time="2025-08-13T00:56:41.472728730Z" level=info msg="Stop container \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" with signal terminated"
Aug 13 00:56:41.483456 systemd[1]: cri-containerd-e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea.scope: Deactivated successfully.
Aug 13 00:56:41.505934 env[1199]: time="2025-08-13T00:56:41.501086922Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:56:41.505126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea-rootfs.mount: Deactivated successfully.
Aug 13 00:56:41.507607 env[1199]: time="2025-08-13T00:56:41.507565540Z" level=info msg="StopContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" with timeout 2 (s)"
Aug 13 00:56:41.507992 env[1199]: time="2025-08-13T00:56:41.507961435Z" level=info msg="Stop container \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" with signal terminated"
Aug 13 00:56:41.516091 systemd-networkd[1021]: lxc_health: Link DOWN
Aug 13 00:56:41.516100 systemd-networkd[1021]: lxc_health: Lost carrier
Aug 13 00:56:41.518299 env[1199]: time="2025-08-13T00:56:41.518250240Z" level=info msg="shim disconnected" id=e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea
Aug 13 00:56:41.518403 env[1199]: time="2025-08-13T00:56:41.518300607Z" level=warning msg="cleaning up after shim disconnected" id=e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea namespace=k8s.io
Aug 13 00:56:41.518403 env[1199]: time="2025-08-13T00:56:41.518311878Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:41.526661 env[1199]: time="2025-08-13T00:56:41.526594747Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3609 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:41.530240 env[1199]: time="2025-08-13T00:56:41.530197750Z" level=info msg="StopContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" returns successfully"
Aug 13 00:56:41.531016 env[1199]: time="2025-08-13T00:56:41.530988829Z" level=info msg="StopPodSandbox for \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\""
Aug 13 00:56:41.531095 env[1199]: time="2025-08-13T00:56:41.531072078Z" level=info msg="Container to stop \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.533134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568-shm.mount: Deactivated successfully.
Aug 13 00:56:41.541781 systemd[1]: cri-containerd-0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568.scope: Deactivated successfully.
Aug 13 00:56:41.563155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568-rootfs.mount: Deactivated successfully.
Aug 13 00:56:41.563952 systemd[1]: cri-containerd-002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9.scope: Deactivated successfully.
Aug 13 00:56:41.564286 systemd[1]: cri-containerd-002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9.scope: Consumed 6.803s CPU time.
Aug 13 00:56:41.570203 env[1199]: time="2025-08-13T00:56:41.570142272Z" level=info msg="shim disconnected" id=0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568
Aug 13 00:56:41.570203 env[1199]: time="2025-08-13T00:56:41.570194762Z" level=warning msg="cleaning up after shim disconnected" id=0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568 namespace=k8s.io
Aug 13 00:56:41.570203 env[1199]: time="2025-08-13T00:56:41.570204040Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:41.578727 env[1199]: time="2025-08-13T00:56:41.578664347Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3648 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:41.579126 env[1199]: time="2025-08-13T00:56:41.579092243Z" level=info msg="TearDown network for sandbox \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\" successfully"
Aug 13 00:56:41.579126 env[1199]: time="2025-08-13T00:56:41.579123493Z" level=info msg="StopPodSandbox for \"0f20b468b07448675e6f9c639633773830606601969e2f47bd5087c38f2c4568\" returns successfully"
Aug 13 00:56:41.582882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9-rootfs.mount: Deactivated successfully.
Aug 13 00:56:41.594741 env[1199]: time="2025-08-13T00:56:41.594689455Z" level=info msg="shim disconnected" id=002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9
Aug 13 00:56:41.594741 env[1199]: time="2025-08-13T00:56:41.594737416Z" level=warning msg="cleaning up after shim disconnected" id=002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9 namespace=k8s.io
Aug 13 00:56:41.594741 env[1199]: time="2025-08-13T00:56:41.594746043Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:41.603668 env[1199]: time="2025-08-13T00:56:41.603605241Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:41.606424 env[1199]: time="2025-08-13T00:56:41.606361078Z" level=info msg="StopContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" returns successfully"
Aug 13 00:56:41.607190 env[1199]: time="2025-08-13T00:56:41.607052296Z" level=info msg="StopPodSandbox for \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\""
Aug 13 00:56:41.607190 env[1199]: time="2025-08-13T00:56:41.607129463Z" level=info msg="Container to stop \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.607190 env[1199]: time="2025-08-13T00:56:41.607147728Z" level=info msg="Container to stop \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.607190 env[1199]: time="2025-08-13T00:56:41.607160623Z" level=info msg="Container to stop \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.607190 env[1199]: time="2025-08-13T00:56:41.607175671Z" level=info msg="Container to stop \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.607476 env[1199]: time="2025-08-13T00:56:41.607196090Z" level=info msg="Container to stop \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:41.613501 systemd[1]: cri-containerd-1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874.scope: Deactivated successfully.
Aug 13 00:56:41.639835 env[1199]: time="2025-08-13T00:56:41.639765626Z" level=info msg="shim disconnected" id=1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874
Aug 13 00:56:41.639835 env[1199]: time="2025-08-13T00:56:41.639819228Z" level=warning msg="cleaning up after shim disconnected" id=1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874 namespace=k8s.io
Aug 13 00:56:41.639835 env[1199]: time="2025-08-13T00:56:41.639828726Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:41.643480 kubelet[1902]: I0813 00:56:41.643427 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2223223b-e61d-4d69-855d-6c0d95269d1b-cilium-config-path\") pod \"2223223b-e61d-4d69-855d-6c0d95269d1b\" (UID: \"2223223b-e61d-4d69-855d-6c0d95269d1b\") "
Aug 13 00:56:41.643480 kubelet[1902]: I0813 00:56:41.643480 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2k7k\" (UniqueName: \"kubernetes.io/projected/2223223b-e61d-4d69-855d-6c0d95269d1b-kube-api-access-x2k7k\") pod \"2223223b-e61d-4d69-855d-6c0d95269d1b\" (UID: \"2223223b-e61d-4d69-855d-6c0d95269d1b\") "
Aug 13 00:56:41.645629 kubelet[1902]: I0813 00:56:41.645597 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2223223b-e61d-4d69-855d-6c0d95269d1b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2223223b-e61d-4d69-855d-6c0d95269d1b" (UID: "2223223b-e61d-4d69-855d-6c0d95269d1b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:56:41.647125 kubelet[1902]: I0813 00:56:41.647062 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2223223b-e61d-4d69-855d-6c0d95269d1b-kube-api-access-x2k7k" (OuterVolumeSpecName: "kube-api-access-x2k7k") pod "2223223b-e61d-4d69-855d-6c0d95269d1b" (UID: "2223223b-e61d-4d69-855d-6c0d95269d1b"). InnerVolumeSpecName "kube-api-access-x2k7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:41.647568 env[1199]: time="2025-08-13T00:56:41.647526519Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3697 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:41.647890 env[1199]: time="2025-08-13T00:56:41.647828445Z" level=info msg="TearDown network for sandbox \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" successfully"
Aug 13 00:56:41.647890 env[1199]: time="2025-08-13T00:56:41.647857310Z" level=info msg="StopPodSandbox for \"1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874\" returns successfully"
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744499 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-etc-cni-netd\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744560 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdh5\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-kube-api-access-vxdh5\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744578 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cni-path\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744591 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-lib-modules\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744609 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-bpf-maps\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.744707 kubelet[1902]: I0813 00:56:41.744624 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-xtables-lock\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744638 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-run\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744653 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-cgroup\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744667 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-net\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744682 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-hubble-tls\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744704 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80870bc1-a32f-4ee1-99e0-1caea40cf072-clustermesh-secrets\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745115 kubelet[1902]: I0813 00:56:41.744723 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-kernel\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") "
Aug 13 00:56:41.745297 kubelet[1902]: I0813 00:56:41.744737 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-hostproc\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID:
\"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " Aug 13 00:56:41.745297 kubelet[1902]: I0813 00:56:41.744756 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-config-path\") pod \"80870bc1-a32f-4ee1-99e0-1caea40cf072\" (UID: \"80870bc1-a32f-4ee1-99e0-1caea40cf072\") " Aug 13 00:56:41.745297 kubelet[1902]: I0813 00:56:41.744794 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2223223b-e61d-4d69-855d-6c0d95269d1b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.745297 kubelet[1902]: I0813 00:56:41.744803 1902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2k7k\" (UniqueName: \"kubernetes.io/projected/2223223b-e61d-4d69-855d-6c0d95269d1b-kube-api-access-x2k7k\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.747520 kubelet[1902]: I0813 00:56:41.744704 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747520 kubelet[1902]: I0813 00:56:41.744753 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747520 kubelet[1902]: I0813 00:56:41.744766 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cni-path" (OuterVolumeSpecName: "cni-path") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747520 kubelet[1902]: I0813 00:56:41.744776 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747520 kubelet[1902]: I0813 00:56:41.744786 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747846 kubelet[1902]: I0813 00:56:41.744796 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747846 kubelet[1902]: I0813 00:56:41.745306 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747846 kubelet[1902]: I0813 00:56:41.745320 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747846 kubelet[1902]: I0813 00:56:41.745412 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-hostproc" (OuterVolumeSpecName: "hostproc") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.747846 kubelet[1902]: I0813 00:56:41.745427 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:41.748026 kubelet[1902]: I0813 00:56:41.747073 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:56:41.748026 kubelet[1902]: I0813 00:56:41.747396 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-kube-api-access-vxdh5" (OuterVolumeSpecName: "kube-api-access-vxdh5") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "kube-api-access-vxdh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:56:41.748026 kubelet[1902]: I0813 00:56:41.747827 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80870bc1-a32f-4ee1-99e0-1caea40cf072-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:56:41.748966 kubelet[1902]: I0813 00:56:41.748918 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "80870bc1-a32f-4ee1-99e0-1caea40cf072" (UID: "80870bc1-a32f-4ee1-99e0-1caea40cf072"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845202 1902 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80870bc1-a32f-4ee1-99e0-1caea40cf072-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845259 1902 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845271 1902 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845279 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845287 1902 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845293 1902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdh5\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-kube-api-access-vxdh5\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 kubelet[1902]: I0813 00:56:41.845300 1902 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845295 
kubelet[1902]: I0813 00:56:41.845307 1902 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845314 1902 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845320 1902 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845326 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845337 1902 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845344 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80870bc1-a32f-4ee1-99e0-1caea40cf072-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.845715 kubelet[1902]: I0813 00:56:41.845350 1902 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80870bc1-a32f-4ee1-99e0-1caea40cf072-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:41.920416 systemd[1]: Removed slice kubepods-burstable-pod80870bc1_a32f_4ee1_99e0_1caea40cf072.slice. 
Aug 13 00:56:41.920657 systemd[1]: kubepods-burstable-pod80870bc1_a32f_4ee1_99e0_1caea40cf072.slice: Consumed 6.927s CPU time. Aug 13 00:56:41.921800 systemd[1]: Removed slice kubepods-besteffort-pod2223223b_e61d_4d69_855d_6c0d95269d1b.slice. Aug 13 00:56:42.249433 kubelet[1902]: I0813 00:56:42.249400 1902 scope.go:117] "RemoveContainer" containerID="e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea" Aug 13 00:56:42.251003 env[1199]: time="2025-08-13T00:56:42.250957782Z" level=info msg="RemoveContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\"" Aug 13 00:56:42.258425 env[1199]: time="2025-08-13T00:56:42.258376747Z" level=info msg="RemoveContainer for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" returns successfully" Aug 13 00:56:42.258740 kubelet[1902]: I0813 00:56:42.258701 1902 scope.go:117] "RemoveContainer" containerID="e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea" Aug 13 00:56:42.259103 env[1199]: time="2025-08-13T00:56:42.258965881Z" level=error msg="ContainerStatus for \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\": not found" Aug 13 00:56:42.259891 kubelet[1902]: E0813 00:56:42.259853 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\": not found" containerID="e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea" Aug 13 00:56:42.261286 kubelet[1902]: I0813 00:56:42.261171 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea"} err="failed to get container status 
\"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"e65bffc99e9e93eaa3c565103b6da53808c2e26b1a9be36e38a9a47f1743b1ea\": not found" Aug 13 00:56:42.261286 kubelet[1902]: I0813 00:56:42.261283 1902 scope.go:117] "RemoveContainer" containerID="002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9" Aug 13 00:56:42.262792 env[1199]: time="2025-08-13T00:56:42.262759078Z" level=info msg="RemoveContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\"" Aug 13 00:56:42.266084 env[1199]: time="2025-08-13T00:56:42.266045698Z" level=info msg="RemoveContainer for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" returns successfully" Aug 13 00:56:42.266348 kubelet[1902]: I0813 00:56:42.266320 1902 scope.go:117] "RemoveContainer" containerID="2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b" Aug 13 00:56:42.268097 env[1199]: time="2025-08-13T00:56:42.268067045Z" level=info msg="RemoveContainer for \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\"" Aug 13 00:56:42.271825 env[1199]: time="2025-08-13T00:56:42.271776192Z" level=info msg="RemoveContainer for \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\" returns successfully" Aug 13 00:56:42.272155 kubelet[1902]: I0813 00:56:42.272105 1902 scope.go:117] "RemoveContainer" containerID="29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635" Aug 13 00:56:42.274264 env[1199]: time="2025-08-13T00:56:42.274231877Z" level=info msg="RemoveContainer for \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\"" Aug 13 00:56:42.279168 env[1199]: time="2025-08-13T00:56:42.279101057Z" level=info msg="RemoveContainer for \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\" returns successfully" Aug 13 00:56:42.279455 kubelet[1902]: I0813 00:56:42.279427 1902 scope.go:117] "RemoveContainer" 
containerID="55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a" Aug 13 00:56:42.280662 env[1199]: time="2025-08-13T00:56:42.280634573Z" level=info msg="RemoveContainer for \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\"" Aug 13 00:56:42.284691 env[1199]: time="2025-08-13T00:56:42.284649392Z" level=info msg="RemoveContainer for \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\" returns successfully" Aug 13 00:56:42.284962 kubelet[1902]: I0813 00:56:42.284922 1902 scope.go:117] "RemoveContainer" containerID="58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728" Aug 13 00:56:42.286290 env[1199]: time="2025-08-13T00:56:42.286232874Z" level=info msg="RemoveContainer for \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\"" Aug 13 00:56:42.289900 env[1199]: time="2025-08-13T00:56:42.289852821Z" level=info msg="RemoveContainer for \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\" returns successfully" Aug 13 00:56:42.290080 kubelet[1902]: I0813 00:56:42.290038 1902 scope.go:117] "RemoveContainer" containerID="002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9" Aug 13 00:56:42.290482 env[1199]: time="2025-08-13T00:56:42.290409522Z" level=error msg="ContainerStatus for \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\": not found" Aug 13 00:56:42.290639 kubelet[1902]: E0813 00:56:42.290609 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\": not found" containerID="002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9" Aug 13 00:56:42.290729 kubelet[1902]: I0813 00:56:42.290644 1902 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9"} err="failed to get container status \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"002df9ba586dff1a5182c9004ab8939f1321f6b8bddfde4d9feaecbbe95254e9\": not found" Aug 13 00:56:42.290729 kubelet[1902]: I0813 00:56:42.290668 1902 scope.go:117] "RemoveContainer" containerID="2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b" Aug 13 00:56:42.290870 env[1199]: time="2025-08-13T00:56:42.290822831Z" level=error msg="ContainerStatus for \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\": not found" Aug 13 00:56:42.291011 kubelet[1902]: E0813 00:56:42.290989 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\": not found" containerID="2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b" Aug 13 00:56:42.291088 kubelet[1902]: I0813 00:56:42.291008 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b"} err="failed to get container status \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fe9d9c3f67e4930bf57b6151f455749b93fe7e64119208af50f52e95ad55a4b\": not found" Aug 13 00:56:42.291088 kubelet[1902]: I0813 00:56:42.291039 1902 scope.go:117] "RemoveContainer" containerID="29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635" Aug 13 00:56:42.291205 
env[1199]: time="2025-08-13T00:56:42.291163692Z" level=error msg="ContainerStatus for \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\": not found" Aug 13 00:56:42.291306 kubelet[1902]: E0813 00:56:42.291270 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\": not found" containerID="29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635" Aug 13 00:56:42.291306 kubelet[1902]: I0813 00:56:42.291290 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635"} err="failed to get container status \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\": rpc error: code = NotFound desc = an error occurred when try to find container \"29adc03ab5b67701b540b7543041d4df527d93cf91f85e22d515f337dd6bc635\": not found" Aug 13 00:56:42.291306 kubelet[1902]: I0813 00:56:42.291307 1902 scope.go:117] "RemoveContainer" containerID="55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a" Aug 13 00:56:42.291679 env[1199]: time="2025-08-13T00:56:42.291606797Z" level=error msg="ContainerStatus for \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\": not found" Aug 13 00:56:42.291793 kubelet[1902]: E0813 00:56:42.291775 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\": 
not found" containerID="55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a" Aug 13 00:56:42.291855 kubelet[1902]: I0813 00:56:42.291792 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a"} err="failed to get container status \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"55252213f8c3501c72697e6a5bbecff73ed0216ca78124a69b2f8b205cd76c1a\": not found" Aug 13 00:56:42.291855 kubelet[1902]: I0813 00:56:42.291805 1902 scope.go:117] "RemoveContainer" containerID="58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728" Aug 13 00:56:42.292011 env[1199]: time="2025-08-13T00:56:42.291964068Z" level=error msg="ContainerStatus for \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\": not found" Aug 13 00:56:42.292154 kubelet[1902]: E0813 00:56:42.292128 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\": not found" containerID="58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728" Aug 13 00:56:42.292212 kubelet[1902]: I0813 00:56:42.292153 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728"} err="failed to get container status \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\": rpc error: code = NotFound desc = an error occurred when try to find container \"58856f6b5edf77b040df80fa33d4cf84080e17e53d0d2815cadd867186e70728\": not found" Aug 13 
00:56:42.477986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874-rootfs.mount: Deactivated successfully. Aug 13 00:56:42.478120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f7bd778347accccf09afb9703ef2897b492b9d01b0289c66fea8b02af43c874-shm.mount: Deactivated successfully. Aug 13 00:56:42.478197 systemd[1]: var-lib-kubelet-pods-80870bc1\x2da32f\x2d4ee1\x2d99e0\x2d1caea40cf072-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxdh5.mount: Deactivated successfully. Aug 13 00:56:42.478272 systemd[1]: var-lib-kubelet-pods-2223223b\x2de61d\x2d4d69\x2d855d\x2d6c0d95269d1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx2k7k.mount: Deactivated successfully. Aug 13 00:56:42.478365 systemd[1]: var-lib-kubelet-pods-80870bc1\x2da32f\x2d4ee1\x2d99e0\x2d1caea40cf072-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:56:42.478445 systemd[1]: var-lib-kubelet-pods-80870bc1\x2da32f\x2d4ee1\x2d99e0\x2d1caea40cf072-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:56:43.436454 sshd[3558]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:43.439724 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:58764.service: Deactivated successfully. Aug 13 00:56:43.440356 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:56:43.441084 systemd-logind[1190]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:56:43.442735 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:58770.service. Aug 13 00:56:43.443746 systemd-logind[1190]: Removed session 25. 
Aug 13 00:56:43.478437 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:56:43.479575 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:43.483245 systemd-logind[1190]: New session 26 of user core. Aug 13 00:56:43.484153 systemd[1]: Started session-26.scope. Aug 13 00:56:43.914907 kubelet[1902]: I0813 00:56:43.914764 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2223223b-e61d-4d69-855d-6c0d95269d1b" path="/var/lib/kubelet/pods/2223223b-e61d-4d69-855d-6c0d95269d1b/volumes" Aug 13 00:56:43.915282 kubelet[1902]: I0813 00:56:43.915261 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" path="/var/lib/kubelet/pods/80870bc1-a32f-4ee1-99e0-1caea40cf072/volumes" Aug 13 00:56:44.152049 sshd[3717]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:44.154253 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:58784.service. Aug 13 00:56:44.156197 systemd-logind[1190]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:56:44.157662 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:58770.service: Deactivated successfully. Aug 13 00:56:44.158282 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:56:44.160110 systemd-logind[1190]: Removed session 26. 
Aug 13 00:56:44.176732 kubelet[1902]: E0813 00:56:44.176658 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="mount-cgroup" Aug 13 00:56:44.176732 kubelet[1902]: E0813 00:56:44.176718 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="mount-bpf-fs" Aug 13 00:56:44.176732 kubelet[1902]: E0813 00:56:44.176737 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="apply-sysctl-overwrites" Aug 13 00:56:44.176732 kubelet[1902]: E0813 00:56:44.176747 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2223223b-e61d-4d69-855d-6c0d95269d1b" containerName="cilium-operator" Aug 13 00:56:44.177105 kubelet[1902]: E0813 00:56:44.176755 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="clean-cilium-state" Aug 13 00:56:44.177105 kubelet[1902]: E0813 00:56:44.176764 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="cilium-agent" Aug 13 00:56:44.177105 kubelet[1902]: I0813 00:56:44.176815 1902 memory_manager.go:354] "RemoveStaleState removing state" podUID="80870bc1-a32f-4ee1-99e0-1caea40cf072" containerName="cilium-agent" Aug 13 00:56:44.177105 kubelet[1902]: I0813 00:56:44.176829 1902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2223223b-e61d-4d69-855d-6c0d95269d1b" containerName="cilium-operator" Aug 13 00:56:44.187541 systemd[1]: Created slice kubepods-burstable-podb5755f69_199c_4252_8402_4313b34f52c4.slice. 
Aug 13 00:56:44.193428 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 58784 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:56:44.194449 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:44.200346 systemd[1]: Started session-27.scope. Aug 13 00:56:44.201047 systemd-logind[1190]: New session 27 of user core. Aug 13 00:56:44.261054 kubelet[1902]: I0813 00:56:44.261007 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-clustermesh-secrets\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261054 kubelet[1902]: I0813 00:56:44.261046 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-run\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261054 kubelet[1902]: I0813 00:56:44.261066 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-hostproc\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261276 kubelet[1902]: I0813 00:56:44.261083 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5755f69-199c-4252-8402-4313b34f52c4-cilium-config-path\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261276 kubelet[1902]: I0813 00:56:44.261102 1902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-net\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261276 kubelet[1902]: I0813 00:56:44.261123 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-cilium-ipsec-secrets\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261276 kubelet[1902]: I0813 00:56:44.261139 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdq72\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-kube-api-access-hdq72\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261276 kubelet[1902]: I0813 00:56:44.261155 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-cgroup\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261168 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-xtables-lock\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261182 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-lib-modules\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261196 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-hubble-tls\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261238 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-bpf-maps\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261254 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cni-path\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261405 kubelet[1902]: I0813 00:56:44.261270 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-etc-cni-netd\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.261607 kubelet[1902]: I0813 00:56:44.261284 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-kernel\") pod \"cilium-vtqh7\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " 
pod="kube-system/cilium-vtqh7" Aug 13 00:56:44.320224 sshd[3728]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:44.325033 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:58794.service. Aug 13 00:56:44.328065 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:58784.service: Deactivated successfully. Aug 13 00:56:44.329043 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:56:44.330047 systemd-logind[1190]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:56:44.331561 systemd-logind[1190]: Removed session 27. Aug 13 00:56:44.332409 kubelet[1902]: E0813 00:56:44.332359 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-hdq72 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-vtqh7" podUID="b5755f69-199c-4252-8402-4313b34f52c4" Aug 13 00:56:44.372924 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 58794 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:56:44.374302 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:44.380655 systemd-logind[1190]: New session 28 of user core. Aug 13 00:56:44.381605 systemd[1]: Started session-28.scope. 
Aug 13 00:56:45.366105 kubelet[1902]: I0813 00:56:45.366054 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-run\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366105 kubelet[1902]: I0813 00:56:45.366089 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5755f69-199c-4252-8402-4313b34f52c4-cilium-config-path\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366105 kubelet[1902]: I0813 00:56:45.366104 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-bpf-maps\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366105 kubelet[1902]: I0813 00:56:45.366118 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-xtables-lock\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366129 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-kernel\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366145 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-clustermesh-secrets\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366158 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-hubble-tls\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366170 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cni-path\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366183 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-cilium-ipsec-secrets\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366701 kubelet[1902]: I0813 00:56:45.366198 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdq72\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-kube-api-access-hdq72\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366920 kubelet[1902]: I0813 00:56:45.366209 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-lib-modules\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366920 kubelet[1902]: I0813 00:56:45.366205 1902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.366920 kubelet[1902]: I0813 00:56:45.366221 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-hostproc\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.366920 kubelet[1902]: I0813 00:56:45.366238 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.366920 kubelet[1902]: I0813 00:56:45.366255 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366266 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-net\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366289 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-cgroup\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366307 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-etc-cni-netd\") pod \"b5755f69-199c-4252-8402-4313b34f52c4\" (UID: \"b5755f69-199c-4252-8402-4313b34f52c4\") " Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366346 1902 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366359 1902 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366372 1902 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.367113 kubelet[1902]: I0813 00:56:45.366394 1902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367388 kubelet[1902]: I0813 00:56:45.366423 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367388 kubelet[1902]: I0813 00:56:45.366446 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367388 kubelet[1902]: I0813 00:56:45.366489 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367388 kubelet[1902]: I0813 00:56:45.366538 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367388 kubelet[1902]: I0813 00:56:45.367375 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.367620 kubelet[1902]: I0813 00:56:45.367397 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:45.368291 kubelet[1902]: I0813 00:56:45.368263 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5755f69-199c-4252-8402-4313b34f52c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:56:45.370933 systemd[1]: var-lib-kubelet-pods-b5755f69\x2d199c\x2d4252\x2d8402\x2d4313b34f52c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:56:45.371341 kubelet[1902]: I0813 00:56:45.371305 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:56:45.371887 kubelet[1902]: I0813 00:56:45.371862 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:56:45.372056 kubelet[1902]: I0813 00:56:45.372010 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:56:45.372774 kubelet[1902]: I0813 00:56:45.372734 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-kube-api-access-hdq72" (OuterVolumeSpecName: "kube-api-access-hdq72") pod "b5755f69-199c-4252-8402-4313b34f52c4" (UID: "b5755f69-199c-4252-8402-4313b34f52c4"). InnerVolumeSpecName "kube-api-access-hdq72". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:56:45.373396 systemd[1]: var-lib-kubelet-pods-b5755f69\x2d199c\x2d4252\x2d8402\x2d4313b34f52c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhdq72.mount: Deactivated successfully. 
Aug 13 00:56:45.373535 systemd[1]: var-lib-kubelet-pods-b5755f69\x2d199c\x2d4252\x2d8402\x2d4313b34f52c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:56:45.373636 systemd[1]: var-lib-kubelet-pods-b5755f69\x2d199c\x2d4252\x2d8402\x2d4313b34f52c4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 00:56:45.466607 kubelet[1902]: I0813 00:56:45.466550 1902 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.466607 kubelet[1902]: I0813 00:56:45.466598 1902 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.466607 kubelet[1902]: I0813 00:56:45.466609 1902 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.466607 kubelet[1902]: I0813 00:56:45.466623 1902 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.466607 kubelet[1902]: I0813 00:56:45.466636 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5755f69-199c-4252-8402-4313b34f52c4-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466660 1902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdq72\" (UniqueName: \"kubernetes.io/projected/b5755f69-199c-4252-8402-4313b34f52c4-kube-api-access-hdq72\") on node \"localhost\" 
DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466680 1902 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466694 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466702 1902 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466708 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5755f69-199c-4252-8402-4313b34f52c4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466715 1902 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.467031 kubelet[1902]: I0813 00:56:45.466722 1902 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5755f69-199c-4252-8402-4313b34f52c4-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:45.916907 systemd[1]: Removed slice kubepods-burstable-podb5755f69_199c_4252_8402_4313b34f52c4.slice. 
Aug 13 00:56:45.959189 kubelet[1902]: E0813 00:56:45.959137 1902 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:56:46.311536 systemd[1]: Created slice kubepods-burstable-pod537c56fe_0c5c_4d73_9a0e_a8c1dff3d0b0.slice. Aug 13 00:56:46.372945 kubelet[1902]: I0813 00:56:46.372884 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-hostproc\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.372945 kubelet[1902]: I0813 00:56:46.372926 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-etc-cni-netd\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.372945 kubelet[1902]: I0813 00:56:46.372942 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dcr\" (UniqueName: \"kubernetes.io/projected/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-kube-api-access-26dcr\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.372945 kubelet[1902]: I0813 00:56:46.372958 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-cilium-config-path\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373009 1902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-cilium-ipsec-secrets\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373046 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-host-proc-sys-net\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373071 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-clustermesh-secrets\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373092 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-bpf-maps\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373109 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-xtables-lock\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373605 kubelet[1902]: I0813 00:56:46.373127 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-cni-path\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373818 kubelet[1902]: I0813 00:56:46.373147 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-cilium-run\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373818 kubelet[1902]: I0813 00:56:46.373172 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-cilium-cgroup\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373818 kubelet[1902]: I0813 00:56:46.373190 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-lib-modules\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373818 kubelet[1902]: I0813 00:56:46.373215 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-host-proc-sys-kernel\") pod \"cilium-n5scl\" (UID: \"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.373818 kubelet[1902]: I0813 00:56:46.373235 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0-hubble-tls\") pod \"cilium-n5scl\" (UID: 
\"537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0\") " pod="kube-system/cilium-n5scl" Aug 13 00:56:46.615134 kubelet[1902]: E0813 00:56:46.614548 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:46.615569 env[1199]: time="2025-08-13T00:56:46.615497062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5scl,Uid:537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0,Namespace:kube-system,Attempt:0,}" Aug 13 00:56:46.630708 env[1199]: time="2025-08-13T00:56:46.630628217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:56:46.630708 env[1199]: time="2025-08-13T00:56:46.630672241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:56:46.630979 env[1199]: time="2025-08-13T00:56:46.630686558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:56:46.631217 env[1199]: time="2025-08-13T00:56:46.631179661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e pid=3771 runtime=io.containerd.runc.v2 Aug 13 00:56:46.646360 systemd[1]: Started cri-containerd-5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e.scope. 
Aug 13 00:56:46.674566 env[1199]: time="2025-08-13T00:56:46.674516447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5scl,Uid:537c56fe-0c5c-4d73-9a0e-a8c1dff3d0b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\""
Aug 13 00:56:46.676006 kubelet[1902]: E0813 00:56:46.675660 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:46.677842 env[1199]: time="2025-08-13T00:56:46.677790419Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:56:46.692700 env[1199]: time="2025-08-13T00:56:46.692664424Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf\""
Aug 13 00:56:46.693170 env[1199]: time="2025-08-13T00:56:46.693116656Z" level=info msg="StartContainer for \"34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf\""
Aug 13 00:56:46.711520 systemd[1]: Started cri-containerd-34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf.scope.
Aug 13 00:56:46.804887 env[1199]: time="2025-08-13T00:56:46.804828504Z" level=info msg="StartContainer for \"34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf\" returns successfully"
Aug 13 00:56:46.817933 systemd[1]: cri-containerd-34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf.scope: Deactivated successfully.
Aug 13 00:56:46.850431 env[1199]: time="2025-08-13T00:56:46.850374459Z" level=info msg="shim disconnected" id=34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf
Aug 13 00:56:46.850431 env[1199]: time="2025-08-13T00:56:46.850424685Z" level=warning msg="cleaning up after shim disconnected" id=34dc3f069ba9fe732d3883ec1427c9784518ba1f087625b75edf72a21d8814bf namespace=k8s.io
Aug 13 00:56:46.850431 env[1199]: time="2025-08-13T00:56:46.850434223Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:46.857691 env[1199]: time="2025-08-13T00:56:46.857658679Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3853 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:47.266430 kubelet[1902]: E0813 00:56:47.266397 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:47.269861 env[1199]: time="2025-08-13T00:56:47.269799167Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:56:47.291564 env[1199]: time="2025-08-13T00:56:47.291440803Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf\""
Aug 13 00:56:47.292208 env[1199]: time="2025-08-13T00:56:47.292176157Z" level=info msg="StartContainer for \"08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf\""
Aug 13 00:56:47.308531 systemd[1]: Started cri-containerd-08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf.scope.
Aug 13 00:56:47.334792 env[1199]: time="2025-08-13T00:56:47.334731258Z" level=info msg="StartContainer for \"08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf\" returns successfully"
Aug 13 00:56:47.340861 systemd[1]: cri-containerd-08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf.scope: Deactivated successfully.
Aug 13 00:56:47.363710 env[1199]: time="2025-08-13T00:56:47.363654081Z" level=info msg="shim disconnected" id=08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf
Aug 13 00:56:47.363884 env[1199]: time="2025-08-13T00:56:47.363715188Z" level=warning msg="cleaning up after shim disconnected" id=08bf4fd8141255c96ea69eed1ef0c57aeadd0d78f18faeaa188174718ad35dcf namespace=k8s.io
Aug 13 00:56:47.363884 env[1199]: time="2025-08-13T00:56:47.363727722Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:47.369961 env[1199]: time="2025-08-13T00:56:47.369910852Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3914 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:47.923009 kubelet[1902]: I0813 00:56:47.922919 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5755f69-199c-4252-8402-4313b34f52c4" path="/var/lib/kubelet/pods/b5755f69-199c-4252-8402-4313b34f52c4/volumes"
Aug 13 00:56:48.085964 kubelet[1902]: I0813 00:56:48.085655 1902 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:56:48Z","lastTransitionTime":"2025-08-13T00:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:56:48.276193 kubelet[1902]: E0813 00:56:48.275682 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:48.279577 env[1199]: time="2025-08-13T00:56:48.279378663Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:56:48.308177 env[1199]: time="2025-08-13T00:56:48.308109019Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3\""
Aug 13 00:56:48.309048 env[1199]: time="2025-08-13T00:56:48.308989311Z" level=info msg="StartContainer for \"515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3\""
Aug 13 00:56:48.349531 systemd[1]: Started cri-containerd-515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3.scope.
Aug 13 00:56:48.389586 env[1199]: time="2025-08-13T00:56:48.389504427Z" level=info msg="StartContainer for \"515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3\" returns successfully"
Aug 13 00:56:48.398530 systemd[1]: cri-containerd-515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3.scope: Deactivated successfully.
Aug 13 00:56:48.428017 env[1199]: time="2025-08-13T00:56:48.427894707Z" level=info msg="shim disconnected" id=515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3
Aug 13 00:56:48.428017 env[1199]: time="2025-08-13T00:56:48.427957737Z" level=warning msg="cleaning up after shim disconnected" id=515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3 namespace=k8s.io
Aug 13 00:56:48.428017 env[1199]: time="2025-08-13T00:56:48.427976995Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:48.438343 env[1199]: time="2025-08-13T00:56:48.438270760Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3970 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:48.478691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-515bd59e8898d78bce7ea102b00923da7914de6297968a8f41346005c89956c3-rootfs.mount: Deactivated successfully.
Aug 13 00:56:49.281891 kubelet[1902]: E0813 00:56:49.281833 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:49.283892 env[1199]: time="2025-08-13T00:56:49.283848471Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:56:49.589546 env[1199]: time="2025-08-13T00:56:49.589368397Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57\""
Aug 13 00:56:49.590144 env[1199]: time="2025-08-13T00:56:49.590110596Z" level=info msg="StartContainer for \"dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57\""
Aug 13 00:56:49.607870 systemd[1]: run-containerd-runc-k8s.io-dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57-runc.kV7XFo.mount: Deactivated successfully.
Aug 13 00:56:49.611001 systemd[1]: Started cri-containerd-dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57.scope.
Aug 13 00:56:49.634056 systemd[1]: cri-containerd-dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57.scope: Deactivated successfully.
Aug 13 00:56:49.677916 env[1199]: time="2025-08-13T00:56:49.677847831Z" level=info msg="StartContainer for \"dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57\" returns successfully"
Aug 13 00:56:49.704949 env[1199]: time="2025-08-13T00:56:49.704895161Z" level=info msg="shim disconnected" id=dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57
Aug 13 00:56:49.704949 env[1199]: time="2025-08-13T00:56:49.704943203Z" level=warning msg="cleaning up after shim disconnected" id=dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57 namespace=k8s.io
Aug 13 00:56:49.704949 env[1199]: time="2025-08-13T00:56:49.704951739Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:49.712938 env[1199]: time="2025-08-13T00:56:49.712866372Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4025 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:50.286012 kubelet[1902]: E0813 00:56:50.285979 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:50.287886 env[1199]: time="2025-08-13T00:56:50.287843620Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:56:50.305783 env[1199]: time="2025-08-13T00:56:50.305711708Z" level=info msg="CreateContainer within sandbox \"5cdf8e8c823b1c571b83008b21a78893c2a935ee9237f10644bf5989233f7a5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937\""
Aug 13 00:56:50.306568 env[1199]: time="2025-08-13T00:56:50.306504644Z" level=info msg="StartContainer for \"f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937\""
Aug 13 00:56:50.322678 systemd[1]: Started cri-containerd-f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937.scope.
Aug 13 00:56:50.353618 env[1199]: time="2025-08-13T00:56:50.353543891Z" level=info msg="StartContainer for \"f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937\" returns successfully"
Aug 13 00:56:50.488179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcc76c78fa2f5e97b68b666aefd4168165188e16fe489cb6c06e3a943132cf57-rootfs.mount: Deactivated successfully.
Aug 13 00:56:50.668498 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:56:51.290567 kubelet[1902]: E0813 00:56:51.290518 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:52.611146 systemd[1]: run-containerd-runc-k8s.io-f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937-runc.O8ZBVf.mount: Deactivated successfully.
Aug 13 00:56:52.616286 kubelet[1902]: E0813 00:56:52.616261 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:53.356776 systemd-networkd[1021]: lxc_health: Link UP
Aug 13 00:56:53.368839 systemd-networkd[1021]: lxc_health: Gained carrier
Aug 13 00:56:53.369497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:56:54.616613 kubelet[1902]: E0813 00:56:54.616548 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:54.629447 kubelet[1902]: I0813 00:56:54.629379 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n5scl" podStartSLOduration=8.629360929 podStartE2EDuration="8.629360929s" podCreationTimestamp="2025-08-13 00:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:56:51.310003297 +0000 UTC m=+105.487955569" watchObservedRunningTime="2025-08-13 00:56:54.629360929 +0000 UTC m=+108.807313201"
Aug 13 00:56:54.935731 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Aug 13 00:56:55.297828 kubelet[1902]: E0813 00:56:55.297710 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:56.299720 kubelet[1902]: E0813 00:56:56.299682 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:56.810212 systemd[1]: run-containerd-runc-k8s.io-f83d96b3d2354297546796e482dfe7b388de6f400f9aea01acdf5cf6eb7d4937-runc.ZuqssD.mount: Deactivated successfully.
Aug 13 00:56:58.912076 kubelet[1902]: E0813 00:56:58.912039 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:58.949568 sshd[3741]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:58.951761 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:58794.service: Deactivated successfully.
Aug 13 00:56:58.952482 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:56:58.953101 systemd-logind[1190]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:56:58.953749 systemd-logind[1190]: Removed session 28.