Aug 13 07:18:51.896220 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:18:51.896241 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:51.896253 kernel: BIOS-provided physical RAM map:
Aug 13 07:18:51.896259 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:18:51.896265 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:18:51.896271 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:18:51.896279 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:18:51.896285 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:18:51.896291 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:18:51.896298 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:18:51.896306 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:18:51.896312 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:18:51.896319 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:18:51.896325 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:18:51.896333 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:18:51.896340 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:18:51.896349 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:18:51.896356 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:18:51.896363 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:18:51.896369 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:18:51.896376 kernel: NX (Execute Disable) protection: active
Aug 13 07:18:51.896383 kernel: APIC: Static calls initialized
Aug 13 07:18:51.896389 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:18:51.896396 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:18:51.896403 kernel: SMBIOS 2.8 present.
Aug 13 07:18:51.896410 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:18:51.896416 kernel: Hypervisor detected: KVM
Aug 13 07:18:51.896426 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:18:51.896432 kernel: kvm-clock: using sched offset of 4657100544 cycles
Aug 13 07:18:51.896440 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:18:51.896447 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:18:51.896454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:18:51.896461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:18:51.896468 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:18:51.896475 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:18:51.896482 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:18:51.896491 kernel: Using GB pages for direct mapping
Aug 13 07:18:51.896498 kernel: Secure boot disabled
Aug 13 07:18:51.896505 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:18:51.896512 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:18:51.896523 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:18:51.896530 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896537 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896547 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:18:51.896554 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896562 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896569 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896576 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:18:51.896583 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:18:51.896591 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:18:51.896600 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:18:51.896608 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:18:51.896615 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:18:51.896622 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:18:51.896629 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:18:51.896636 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:18:51.896643 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:18:51.896651 kernel: No NUMA configuration found
Aug 13 07:18:51.896658 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:18:51.896667 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:18:51.896675 kernel: Zone ranges:
Aug 13 07:18:51.896682 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:18:51.896689 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:18:51.896696 kernel: Normal empty
Aug 13 07:18:51.896704 kernel: Movable zone start for each node
Aug 13 07:18:51.896711 kernel: Early memory node ranges
Aug 13 07:18:51.896718 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:18:51.896726 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:18:51.896733 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:18:51.896743 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:18:51.896750 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:18:51.896757 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:18:51.896764 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:18:51.896771 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:18:51.896779 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:18:51.896786 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:18:51.896793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:18:51.896801 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:18:51.896810 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:18:51.896817 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:18:51.896825 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:18:51.896832 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:18:51.896839 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:18:51.896846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:18:51.896854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:18:51.896861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:18:51.896868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:18:51.896877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:18:51.896885 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:18:51.896892 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:18:51.896934 kernel: TSC deadline timer available
Aug 13 07:18:51.896941 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:18:51.896948 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:18:51.896955 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:18:51.896962 kernel: kvm-guest: setup PV sched yield
Aug 13 07:18:51.896969 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:18:51.896979 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:18:51.896987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:18:51.896994 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:18:51.897001 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:18:51.897009 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:18:51.897016 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:18:51.897023 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:18:51.897030 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:18:51.897038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:51.897049 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:18:51.897056 kernel: random: crng init done
Aug 13 07:18:51.897063 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:18:51.897070 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:18:51.897077 kernel: Fallback order for Node 0: 0
Aug 13 07:18:51.897085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:18:51.897092 kernel: Policy zone: DMA32
Aug 13 07:18:51.897099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:18:51.897113 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:18:51.897123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:18:51.897130 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:18:51.897137 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:18:51.897145 kernel: Dynamic Preempt: voluntary
Aug 13 07:18:51.897160 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:18:51.897170 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:18:51.897177 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:18:51.897185 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:18:51.897193 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:18:51.897200 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:18:51.897208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:18:51.897217 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:18:51.897225 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:18:51.897232 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:18:51.897242 kernel: Console: colour dummy device 80x25
Aug 13 07:18:51.897257 kernel: printk: console [ttyS0] enabled
Aug 13 07:18:51.897272 kernel: ACPI: Core revision 20230628
Aug 13 07:18:51.897282 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:18:51.897292 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:18:51.897301 kernel: x2apic enabled
Aug 13 07:18:51.897311 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:18:51.897321 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:18:51.897331 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:18:51.897341 kernel: kvm-guest: setup PV IPIs
Aug 13 07:18:51.897351 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:18:51.897362 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:18:51.897370 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:18:51.897377 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:18:51.897385 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:18:51.897392 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:18:51.897400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:18:51.897408 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:18:51.897415 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:18:51.897423 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:18:51.897433 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:18:51.897441 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:18:51.897449 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:18:51.897456 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:18:51.897464 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:18:51.897472 kernel: x86/bugs: return thunk changed
Aug 13 07:18:51.897479 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:18:51.897487 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:18:51.897495 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:18:51.897504 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:18:51.897512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:18:51.897519 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:18:51.897527 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:18:51.897535 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:18:51.897542 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:18:51.897550 kernel: landlock: Up and running.
Aug 13 07:18:51.897557 kernel: SELinux: Initializing.
Aug 13 07:18:51.897565 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:18:51.897575 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:18:51.897582 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:18:51.897590 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:18:51.897598 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:18:51.897606 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:18:51.897613 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:18:51.897621 kernel: ... version: 0
Aug 13 07:18:51.897628 kernel: ... bit width: 48
Aug 13 07:18:51.897638 kernel: ... generic registers: 6
Aug 13 07:18:51.897645 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:18:51.897653 kernel: ... max period: 00007fffffffffff
Aug 13 07:18:51.897660 kernel: ... fixed-purpose events: 0
Aug 13 07:18:51.897668 kernel: ... event mask: 000000000000003f
Aug 13 07:18:51.897675 kernel: signal: max sigframe size: 1776
Aug 13 07:18:51.897683 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:18:51.897691 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:18:51.897698 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:18:51.897706 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:18:51.897716 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:18:51.897723 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:18:51.897731 kernel: smpboot: Max logical packages: 1
Aug 13 07:18:51.897739 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:18:51.897746 kernel: devtmpfs: initialized
Aug 13 07:18:51.897754 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:18:51.897762 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:18:51.897770 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:18:51.897777 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:18:51.897787 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:18:51.897795 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:18:51.897803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:18:51.897810 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:18:51.897818 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:18:51.897825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:18:51.897833 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:18:51.897841 kernel: audit: type=2000 audit(1755069531.116:1): state=initialized audit_enabled=0 res=1
Aug 13 07:18:51.897851 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:18:51.897858 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:18:51.897866 kernel: cpuidle: using governor menu
Aug 13 07:18:51.897873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:18:51.897881 kernel: dca service started, version 1.12.1
Aug 13 07:18:51.897889 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:18:51.897923 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:18:51.897931 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:18:51.897939 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:18:51.897949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:18:51.897957 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:18:51.897964 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:18:51.897972 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:18:51.897979 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:18:51.897987 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:18:51.897994 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:18:51.898002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:18:51.898010 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:18:51.898019 kernel: ACPI: Interpreter enabled
Aug 13 07:18:51.898027 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:18:51.898034 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:18:51.898042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:18:51.898050 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:18:51.898058 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:18:51.898065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:18:51.898253 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:18:51.898386 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:18:51.898507 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:18:51.898517 kernel: PCI host bridge to bus 0000:00
Aug 13 07:18:51.898641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:18:51.898752 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:18:51.898861 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:18:51.898995 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:18:51.899120 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:18:51.899231 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:18:51.899341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:18:51.899477 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:18:51.899607 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:18:51.899728 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:18:51.899853 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:18:51.899989 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:18:51.900138 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:18:51.900260 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:18:51.900394 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:18:51.900555 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:18:51.900714 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:18:51.900848 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:18:51.900997 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:18:51.901131 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:18:51.901253 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:18:51.901376 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:18:51.901505 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:18:51.901627 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:18:51.901755 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:18:51.901875 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:18:51.902034 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:18:51.902174 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:18:51.902296 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:18:51.902426 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:18:51.902546 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:18:51.902670 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:18:51.902798 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:18:51.902971 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:18:51.902983 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:18:51.902991 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:18:51.902999 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:18:51.903006 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:18:51.903014 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:18:51.903026 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:18:51.903033 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:18:51.903041 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:18:51.903049 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:18:51.903056 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:18:51.903064 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:18:51.903071 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:18:51.903079 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:18:51.903087 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:18:51.903097 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:18:51.903113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:18:51.903120 kernel: iommu: Default domain type: Translated
Aug 13 07:18:51.903128 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:18:51.903135 kernel: efivars: Registered efivars operations
Aug 13 07:18:51.903143 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:18:51.903150 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:18:51.903158 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:18:51.903165 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:18:51.903175 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:18:51.903182 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:18:51.903302 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:18:51.903420 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:18:51.903537 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:18:51.903547 kernel: vgaarb: loaded
Aug 13 07:18:51.903555 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:18:51.903562 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:18:51.903573 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:18:51.903581 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:18:51.903589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:18:51.903596 kernel: pnp: PnP ACPI init
Aug 13 07:18:51.903722 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:18:51.903734 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:18:51.903742 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:18:51.903749 kernel: NET: Registered PF_INET protocol family
Aug 13 07:18:51.903757 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:18:51.903768 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:18:51.903776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:18:51.903784 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:18:51.903792 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:18:51.903800 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:18:51.903808 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:18:51.903816 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:18:51.903823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:18:51.903833 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:18:51.903967 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:18:51.904087 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:18:51.904206 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:18:51.904334 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:18:51.904448 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:18:51.904556 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:18:51.904665 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:18:51.904778 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:18:51.904789 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:18:51.904796 kernel: Initialise system trusted keyrings
Aug 13 07:18:51.904804 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:18:51.904812 kernel: Key type asymmetric registered
Aug 13 07:18:51.904819 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:18:51.904827 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:18:51.904834 kernel: io scheduler mq-deadline registered
Aug 13 07:18:51.904842 kernel: io scheduler kyber registered
Aug 13 07:18:51.904852 kernel: io scheduler bfq registered
Aug 13 07:18:51.904860 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:18:51.904868 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:18:51.904875 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:18:51.904883 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:18:51.904890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:18:51.904945 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:18:51.904953 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:18:51.904960 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:18:51.904971 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:18:51.905110 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:18:51.905238 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:18:51.905249 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Aug 13 07:18:51.905360 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:18:51 UTC (1755069531)
Aug 13 07:18:51.905471 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:18:51.905480 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:18:51.905488 kernel: efifb: probing for efifb
Aug 13 07:18:51.905500 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:18:51.905508 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:18:51.905516 kernel: efifb: scrolling: redraw
Aug 13 07:18:51.905523 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:18:51.905531 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:18:51.905539 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:18:51.905562 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:18:51.905572 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:18:51.905582 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:18:51.905611 kernel: Segment Routing with IPv6
Aug 13 07:18:51.905626 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:18:51.905642 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:18:51.905658 kernel: Key type dns_resolver registered
Aug 13 07:18:51.905667 kernel: IPI shorthand broadcast: enabled
Aug 13 07:18:51.905675 kernel: sched_clock: Marking stable (630003027, 128396957)->(773328184, -14928200)
Aug 13 07:18:51.905683 kernel: registered taskstats version 1
Aug 13 07:18:51.905690 kernel: Loading compiled-in X.509 certificates
Aug 13 07:18:51.905698 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:18:51.905709 kernel: Key type .fscrypt registered
Aug 13 07:18:51.905717 kernel: Key type fscrypt-provisioning registered
Aug 13 07:18:51.905724 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:18:51.905732 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:18:51.905740 kernel: ima: No architecture policies found
Aug 13 07:18:51.905748 kernel: clk: Disabling unused clocks
Aug 13 07:18:51.905756 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:18:51.905763 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:18:51.905771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:18:51.905782 kernel: Run /init as init process
Aug 13 07:18:51.905789 kernel: with arguments:
Aug 13 07:18:51.905797 kernel: /init
Aug 13 07:18:51.905805 kernel: with environment:
Aug 13 07:18:51.905812 kernel: HOME=/
Aug 13 07:18:51.905820 kernel: TERM=linux
Aug 13 07:18:51.905828 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:18:51.905838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:18:51.905850 systemd[1]: Detected virtualization kvm.
Aug 13 07:18:51.905859 systemd[1]: Detected architecture x86-64.
Aug 13 07:18:51.905867 systemd[1]: Running in initrd.
Aug 13 07:18:51.905875 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:18:51.905883 systemd[1]: Hostname set to .
Aug 13 07:18:51.905908 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:18:51.905916 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:18:51.905925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:51.905934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:51.905943 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:18:51.905952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:18:51.905960 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:18:51.905972 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:18:51.905982 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:18:51.905990 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:18:51.905999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:51.906008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:51.906016 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:18:51.906025 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:18:51.906036 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:18:51.906044 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:18:51.906052 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:18:51.906061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:18:51.906069 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:18:51.906078 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:18:51.906086 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:51.906095 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:51.906110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:51.906120 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:18:51.906128 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:18:51.906137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:18:51.906145 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:18:51.906153 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:18:51.906162 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:18:51.906170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:18:51.906178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:51.906189 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:18:51.906197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:51.906206 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:18:51.906215 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:18:51.906242 systemd-journald[193]: Collecting audit messages is disabled.
Aug 13 07:18:51.906263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:51.906272 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:51.906280 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:51.906289 systemd-journald[193]: Journal started
Aug 13 07:18:51.906309 systemd-journald[193]: Runtime Journal (/run/log/journal/888753dc33584d83aa4f4591e064e147) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:18:51.910455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:18:51.910477 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:18:51.912129 systemd-modules-load[194]: Inserted module 'overlay'
Aug 13 07:18:51.917083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:18:51.920609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:51.928247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:51.930974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:51.936115 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:18:51.949711 dracut-cmdline[220]: dracut-dracut-053
Aug 13 07:18:51.950919 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:18:51.952728 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:51.957429 kernel: Bridge firewalling registered
Aug 13 07:18:51.957421 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 13 07:18:51.959760 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:51.970027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:18:51.979171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:51.981009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:52.015050 systemd-resolved[261]: Positive Trust Anchors:
Aug 13 07:18:52.015065 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:18:52.015096 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:18:52.020388 systemd-resolved[261]: Defaulting to hostname 'linux'.
Aug 13 07:18:52.021476 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:52.054582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:52.075924 kernel: SCSI subsystem initialized
Aug 13 07:18:52.085920 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:18:52.096923 kernel: iscsi: registered transport (tcp)
Aug 13 07:18:52.117922 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:18:52.117939 kernel: QLogic iSCSI HBA Driver
Aug 13 07:18:52.169209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:18:52.180049 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:18:52.206281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
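The bridge warning in the log above ("filtering via arp/ip/ip6tables is no longer available by default") is resolved here by systemd-modules-load inserting br_netfilter from the initrd. On a system where bridged traffic must keep passing through iptables (e.g. container hosts), the equivalent persistent configuration would look like the following sketch, using the standard modules-load.d and sysctl.d conventions (the file paths are illustrative, not taken from this log):

```ini
# /etc/modules-load.d/br_netfilter.conf  (illustrative path)
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf  (illustrative path)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

The sysctl keys only exist once br_netfilter is loaded, which is why the module entry must accompany them.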
Aug 13 07:18:52.206313 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:18:52.207271 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:18:52.248929 kernel: raid6: avx2x4 gen() 29377 MB/s
Aug 13 07:18:52.265915 kernel: raid6: avx2x2 gen() 30361 MB/s
Aug 13 07:18:52.283017 kernel: raid6: avx2x1 gen() 25294 MB/s
Aug 13 07:18:52.283042 kernel: raid6: using algorithm avx2x2 gen() 30361 MB/s
Aug 13 07:18:52.301108 kernel: raid6: .... xor() 19453 MB/s, rmw enabled
Aug 13 07:18:52.301130 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:18:52.321923 kernel: xor: automatically using best checksumming function avx
Aug 13 07:18:52.481943 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:18:52.496102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:18:52.503182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:52.515372 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Aug 13 07:18:52.519988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:52.530062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:18:52.546647 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Aug 13 07:18:52.582242 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:18:52.591035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:18:52.654457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:52.666046 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:18:52.678319 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:18:52.681192 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:18:52.682452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:52.685069 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:18:52.689931 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:18:52.692516 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:18:52.698150 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:18:52.697108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:18:52.701276 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:18:52.701299 kernel: GPT:9289727 != 19775487
Aug 13 07:18:52.701310 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:18:52.701972 kernel: GPT:9289727 != 19775487
Aug 13 07:18:52.703505 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:18:52.703567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:18:52.708742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:18:52.722917 kernel: libata version 3.00 loaded.
Aug 13 07:18:52.722946 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:18:52.723929 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:18:52.724488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:18:52.724627 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:52.726920 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:52.734448 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:18:52.734632 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:18:52.729444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:52.729600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
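The "GPT:9289727 != 19775487" warnings are the usual signature of a disk image that was grown without updating the backup GPT header, which by specification sits on the last LBA of the disk: the stale backup still points at the old end of the image. A quick sanity check of the numbers from this log (a sketch; the 512-byte sector size is taken from the virtio_blk line above):

```shell
# Figures copied from the log above.
total_sectors=19775488      # virtio_blk: [vda] 19775488 512-byte logical blocks
stale_alt_lba=9289727       # GPT: 9289727 != 19775487
sector_size=512

# The backup GPT header should sit on the last LBA of the grown disk.
expected_alt_lba=$((total_sectors - 1))
echo "expected alternate header at LBA $expected_alt_lba"   # 19775487, matching the kernel's complaint

# The stale value implies the image's original size before it was grown.
original_bytes=$(( (stale_alt_lba + 1) * sector_size ))
echo "original image size: $original_bytes bytes"
```

The boot proceeds anyway because, a little further down, disk-uuid.service rewrites the headers ("Primary Header is updated. ... Secondary Header is updated."), which is Flatcar's first-boot handling of exactly this condition.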
Aug 13 07:18:52.738733 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:18:52.738909 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:18:52.731124 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:52.740911 kernel: scsi host0: ahci
Aug 13 07:18:52.741703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:52.745035 kernel: scsi host1: ahci
Aug 13 07:18:52.745520 kernel: scsi host2: ahci
Aug 13 07:18:52.745679 kernel: scsi host3: ahci
Aug 13 07:18:52.748929 kernel: scsi host4: ahci
Aug 13 07:18:52.757849 kernel: scsi host5: ahci
Aug 13 07:18:52.758129 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:18:52.758151 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (467)
Aug 13 07:18:52.758166 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:18:52.758176 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:18:52.758186 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:18:52.758195 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:18:52.758205 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:18:52.762131 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457)
Aug 13 07:18:52.762907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:18:52.773015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:18:52.783292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:18:52.783707 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:18:52.788662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:18:52.801215 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:18:52.803456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:52.803542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:52.804270 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:52.805353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:52.817035 disk-uuid[557]: Primary Header is updated.
Aug 13 07:18:52.817035 disk-uuid[557]: Secondary Entries is updated.
Aug 13 07:18:52.817035 disk-uuid[557]: Secondary Header is updated.
Aug 13 07:18:52.820965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:18:52.824036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:52.827922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:18:52.832035 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:52.854465 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:53.061944 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:18:53.069926 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:18:53.069988 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:18:53.069999 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:18:53.070924 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:18:53.071921 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:18:53.072929 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:18:53.072947 kernel: ata3.00: applying bridge limits
Aug 13 07:18:53.073925 kernel: ata3.00: configured for UDMA/100
Aug 13 07:18:53.074935 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:18:53.120518 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:18:53.120843 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:18:53.133934 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:18:53.826921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:18:53.826973 disk-uuid[559]: The operation has completed successfully.
Aug 13 07:18:53.851953 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:18:53.852096 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:18:53.878058 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:18:53.881460 sh[597]: Success
Aug 13 07:18:53.893926 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:18:53.925572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:18:53.946445 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:18:53.950448 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:18:53.960956 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:18:53.960982 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:53.960993 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:18:53.963234 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:18:53.963250 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:18:53.967256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:18:53.967997 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:18:53.975057 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:18:53.977118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:18:53.984956 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:53.984986 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:53.985000 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:18:53.988133 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:18:53.997275 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:18:53.999334 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:54.007587 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:18:54.014139 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:18:54.073611 ignition[679]: Ignition 2.19.0
Aug 13 07:18:54.073623 ignition[679]: Stage: fetch-offline
Aug 13 07:18:54.073663 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:54.073674 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:18:54.073781 ignition[679]: parsed url from cmdline: ""
Aug 13 07:18:54.073785 ignition[679]: no config URL provided
Aug 13 07:18:54.073791 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:18:54.073800 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:18:54.073831 ignition[679]: op(1): [started] loading QEMU firmware config module
Aug 13 07:18:54.073836 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:18:54.082861 ignition[679]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:18:54.099003 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:54.112022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:18:54.123957 ignition[679]: parsing config with SHA512: 96495810c135143fd40776a6791b6b37eb81735cb315d1658f60137782bd75d8f84c5c22054759b716b682627ac8a8965e8df29c0ea7157ae5e8c426d476f842
Aug 13 07:18:54.128687 unknown[679]: fetched base config from "system"
Aug 13 07:18:54.128697 unknown[679]: fetched user config from "qemu"
Aug 13 07:18:54.129622 ignition[679]: fetch-offline: fetch-offline passed
Aug 13 07:18:54.129711 ignition[679]: Ignition finished successfully
Aug 13 07:18:54.132474 systemd-networkd[786]: lo: Link UP
Aug 13 07:18:54.132478 systemd-networkd[786]: lo: Gained carrier
Aug 13 07:18:54.133966 systemd-networkd[786]: Enumeration completed
Aug 13 07:18:54.134043 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:18:54.134667 systemd[1]: Reached target network.target - Network.
Aug 13 07:18:54.135144 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:54.135149 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:18:54.137304 systemd-networkd[786]: eth0: Link UP
Aug 13 07:18:54.137307 systemd-networkd[786]: eth0: Gained carrier
Aug 13 07:18:54.137314 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:54.138448 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:54.139301 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:18:54.152958 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:18:54.154400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:18:54.164711 ignition[789]: Ignition 2.19.0
Aug 13 07:18:54.164722 ignition[789]: Stage: kargs
Aug 13 07:18:54.164887 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:54.164923 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:18:54.165711 ignition[789]: kargs: kargs passed
Aug 13 07:18:54.165760 ignition[789]: Ignition finished successfully
Aug 13 07:18:54.168657 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:54.181018 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
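The repeated networkd warning above ("based on potentially unpredictable interface name") flags that zz-default.network matched eth0 by name, which can change across kernel or udev versions. Where stable matching matters, the standard remedy is to match by hardware address instead. A minimal sketch of such a .network unit, using standard systemd.network [Match] syntax (the file name and MAC address here are illustrative, not from this log):

```ini
# /etc/systemd/network/10-primary.network  (illustrative drop-in)
[Match]
MACAddress=52:54:00:12:34:56

[Network]
DHCP=ipv4
```

A unit in /etc/systemd/network sorts before the vendor zz-default.network, so this match wins regardless of what the interface ends up being called.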
Aug 13 07:18:54.194654 ignition[798]: Ignition 2.19.0
Aug 13 07:18:54.194665 ignition[798]: Stage: disks
Aug 13 07:18:54.194831 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:54.194842 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:18:54.195649 ignition[798]: disks: disks passed
Aug 13 07:18:54.197806 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:18:54.195691 ignition[798]: Ignition finished successfully
Aug 13 07:18:54.199062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:54.200525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:18:54.202598 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:18:54.203581 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:18:54.204130 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:18:54.212043 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:18:54.223765 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:18:54.229590 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:18:54.243022 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:18:54.327916 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:18:54.328544 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:18:54.330086 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:18:54.340969 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:54.342830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:18:54.345323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:18:54.349382 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Aug 13 07:18:54.345380 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:18:54.356165 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:54.356188 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:54.356199 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:18:54.356210 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:18:54.345407 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:54.350760 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:18:54.357229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:54.359749 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:18:54.393530 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:18:54.398793 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:18:54.403473 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:18:54.406974 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:18:54.485677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:54.498011 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:18:54.499602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:54.506980 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:54.523982 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:54.526308 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:18:54.526308 ignition[929]: INFO : Stage: mount
Aug 13 07:18:54.527995 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:54.527995 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:18:54.527995 ignition[929]: INFO : mount: mount passed
Aug 13 07:18:54.527995 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:18:54.533504 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:18:54.542051 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:18:54.960593 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:18:54.973045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:54.978927 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Aug 13 07:18:54.981077 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:54.981092 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:54.981103 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:18:54.983919 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:18:54.985406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:55.013029 ignition[961]: INFO : Ignition 2.19.0
Aug 13 07:18:55.013029 ignition[961]: INFO : Stage: files
Aug 13 07:18:55.014683 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:55.014683 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:18:55.014683 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:18:55.018484 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:18:55.018484 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:18:55.023020 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:18:55.024459 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:18:55.026371 unknown[961]: wrote ssh authorized keys file for user: core
Aug 13 07:18:55.027496 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:18:55.029830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:18:55.031678 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:18:55.073087 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:18:55.226135 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:18:55.227966 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:18:55.227966 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 07:18:55.391107 systemd-networkd[786]: eth0: Gained IPv6LL
Aug 13 07:18:55.443071 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:18:56.207603 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:18:56.210176 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 07:18:56.586447 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 07:18:57.124126 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:18:57.124126 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 07:18:57.128431 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:57.214203 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:57.222116 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:57.223911 ignition[961]: INFO : files: files passed
Aug 13 07:18:57.223911 ignition[961]: INFO : Ignition finished successfully
Aug 13 07:18:57.231225 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:18:57.251053 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:18:57.254505 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:18:57.257312 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:18:57.257438 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:18:57.269958 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:18:57.273877 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:18:57.273877 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:18:57.277542 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:18:57.280614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:18:57.281385 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:18:57.289051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:18:57.316024 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:18:57.316137 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:18:57.317037 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:18:57.319430 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:18:57.319776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:18:57.329098 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:18:57.344112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:18:57.360136 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:18:57.372028 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:18:57.372585 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:18:57.372995 systemd[1]: Stopped target timers.target - Timer Units. 
Aug 13 07:18:57.373496 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:18:57.373643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:18:57.378738 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:18:57.379299 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:18:57.379659 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:18:57.380215 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:18:57.380574 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:18:57.380966 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:18:57.381489 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:18:57.381881 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:18:57.382412 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:18:57.382773 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:18:57.383283 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:18:57.383418 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:18:57.399986 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:18:57.400521 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:18:57.400837 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:18:57.400988 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:18:57.401471 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:18:57.401605 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:18:57.408649 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 13 07:18:57.408769 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:18:57.411520 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:18:57.413398 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:18:57.418974 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:18:57.419546 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:18:57.419915 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:18:57.420452 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:18:57.420573 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:18:57.425323 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:18:57.425450 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:18:57.427076 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:18:57.427230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:18:57.429273 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:18:57.429423 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:18:57.440069 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:18:57.441337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:18:57.441828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:18:57.442007 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:18:57.442494 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:18:57.442633 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:18:57.447054 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 13 07:18:57.447198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:18:57.467608 ignition[1015]: INFO : Ignition 2.19.0 Aug 13 07:18:57.467608 ignition[1015]: INFO : Stage: umount Aug 13 07:18:57.469391 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:18:57.469391 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:18:57.469967 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:18:57.473454 ignition[1015]: INFO : umount: umount passed Aug 13 07:18:57.474343 ignition[1015]: INFO : Ignition finished successfully Aug 13 07:18:57.477454 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:18:57.477604 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:18:57.479814 systemd[1]: Stopped target network.target - Network. Aug 13 07:18:57.481511 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:18:57.481575 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:18:57.483516 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:18:57.483578 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:18:57.485559 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:18:57.485618 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:18:57.487742 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:18:57.487800 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:18:57.490163 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:18:57.492298 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:18:57.500950 systemd-networkd[786]: eth0: DHCPv6 lease lost Aug 13 07:18:57.502125 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Aug 13 07:18:57.502272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:18:57.506212 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:18:57.506370 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:18:57.509175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:18:57.509233 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:18:57.530029 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:18:57.530472 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:18:57.530545 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:18:57.530891 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:18:57.530999 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:18:57.531417 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:18:57.531479 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:18:57.531771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:18:57.531829 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:18:57.532420 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:18:57.553751 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:18:57.554018 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:18:57.556594 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:18:57.556736 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:18:57.559290 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Aug 13 07:18:57.559384 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:18:57.560828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:18:57.560880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:18:57.562998 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:18:57.563064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:18:57.565463 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:18:57.565524 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:18:57.567655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:18:57.567718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:18:57.586046 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:18:57.586322 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:18:57.586381 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:18:57.586686 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 07:18:57.586731 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:18:57.587177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:18:57.587221 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:18:57.587575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:18:57.587618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:18:57.592913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:18:57.593036 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Aug 13 07:18:57.731388 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:18:57.731532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:18:57.733525 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:18:57.735212 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:18:57.735267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:18:57.746018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:18:57.752800 systemd[1]: Switching root. Aug 13 07:18:57.784626 systemd-journald[193]: Journal stopped Aug 13 07:18:59.096935 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Aug 13 07:18:59.097020 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:18:59.097041 kernel: SELinux: policy capability open_perms=1 Aug 13 07:18:59.097067 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:18:59.097082 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:18:59.097104 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:18:59.097116 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:18:59.097127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:18:59.097138 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:18:59.097149 kernel: audit: type=1403 audit(1755069538.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:18:59.097173 systemd[1]: Successfully loaded SELinux policy in 42.920ms. Aug 13 07:18:59.097199 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.589ms. 
Aug 13 07:18:59.097212 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:18:59.097225 systemd[1]: Detected virtualization kvm. Aug 13 07:18:59.097242 systemd[1]: Detected architecture x86-64. Aug 13 07:18:59.097253 systemd[1]: Detected first boot. Aug 13 07:18:59.097265 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:18:59.097276 zram_generator::config[1060]: No configuration found. Aug 13 07:18:59.097289 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:18:59.097301 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:18:59.097313 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:18:59.097326 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:18:59.097350 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:18:59.097366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:18:59.097380 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:18:59.097392 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:18:59.097415 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:18:59.097428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:18:59.097440 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:18:59.097452 systemd[1]: Created slice user.slice - User and Session Slice. 
Aug 13 07:18:59.097471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:18:59.097483 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:18:59.097495 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:18:59.097507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:18:59.097519 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:18:59.097531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:18:59.097542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:18:59.097554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:18:59.097566 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:18:59.097583 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:18:59.097596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:18:59.097607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:18:59.097619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:18:59.097636 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:18:59.097648 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:18:59.097659 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:18:59.097671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:18:59.097725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:18:59.097740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Aug 13 07:18:59.097752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:18:59.097764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:18:59.097779 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:18:59.097790 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:18:59.097802 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:18:59.097814 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:18:59.097826 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:18:59.097843 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:18:59.097855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:18:59.097867 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:18:59.097879 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:18:59.097948 systemd[1]: Reached target machines.target - Containers. Aug 13 07:18:59.097962 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:18:59.097974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:18:59.097988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:18:59.098006 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:18:59.098018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:18:59.098031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Aug 13 07:18:59.098043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:18:59.098055 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:18:59.098067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:18:59.098080 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:18:59.098091 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:18:59.098103 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:18:59.098119 kernel: fuse: init (API version 7.39) Aug 13 07:18:59.098139 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:18:59.098150 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:18:59.098162 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:18:59.098177 kernel: loop: module loaded Aug 13 07:18:59.098192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:18:59.098209 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:18:59.098231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:18:59.098248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:18:59.098271 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:18:59.098284 systemd[1]: Stopped verity-setup.service. Aug 13 07:18:59.098296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:18:59.098308 kernel: ACPI: bus type drm_connector registered Aug 13 07:18:59.098339 systemd-journald[1134]: Collecting audit messages is disabled. 
Aug 13 07:18:59.098368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:18:59.098381 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:18:59.098393 systemd-journald[1134]: Journal started Aug 13 07:18:59.098415 systemd-journald[1134]: Runtime Journal (/run/log/journal/888753dc33584d83aa4f4591e064e147) is 6.0M, max 48.3M, 42.2M free. Aug 13 07:18:58.850345 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:18:58.874878 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:18:58.875353 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:18:59.100933 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:18:59.102022 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:18:59.103230 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:18:59.104473 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:18:59.105732 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:18:59.107109 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:18:59.108611 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:18:59.110291 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:18:59.110471 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:18:59.112157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:18:59.112333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:18:59.113788 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:18:59.114036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:18:59.115454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 07:18:59.115629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:18:59.117182 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:18:59.117376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:18:59.118771 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:18:59.118969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:18:59.120437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:18:59.121822 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:18:59.123373 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:18:59.142352 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:18:59.158066 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:18:59.160455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:18:59.161555 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:18:59.161585 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:18:59.163578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:18:59.167546 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:18:59.170301 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:18:59.172058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:18:59.174171 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Aug 13 07:18:59.178564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:18:59.179957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:18:59.183792 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:18:59.185373 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:18:59.192313 systemd-journald[1134]: Time spent on flushing to /var/log/journal/888753dc33584d83aa4f4591e064e147 is 30.756ms for 996 entries. Aug 13 07:18:59.192313 systemd-journald[1134]: System Journal (/var/log/journal/888753dc33584d83aa4f4591e064e147) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:18:59.286665 systemd-journald[1134]: Received client request to flush runtime journal. Aug 13 07:18:59.286873 kernel: loop0: detected capacity change from 0 to 142488 Aug 13 07:18:59.191440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:18:59.197223 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:18:59.202103 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:18:59.206696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:18:59.208473 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:18:59.210183 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:18:59.212355 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:18:59.214263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:18:59.223714 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Aug 13 07:18:59.276581 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:18:59.283125 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:18:59.289318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:18:59.297043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:18:59.300208 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 13 07:18:59.300226 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 13 07:18:59.302546 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:18:59.306085 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:18:59.306869 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:18:59.308666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:18:59.314010 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:18:59.318173 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:18:59.359463 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 07:18:59.386356 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:18:59.398174 kernel: loop2: detected capacity change from 0 to 140768 Aug 13 07:18:59.398565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:18:59.486690 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Aug 13 07:18:59.486710 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. 
Aug 13 07:18:59.489272 kernel: loop3: detected capacity change from 0 to 142488 Aug 13 07:18:59.495225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:18:59.731939 kernel: loop4: detected capacity change from 0 to 224512 Aug 13 07:18:59.740918 kernel: loop5: detected capacity change from 0 to 140768 Aug 13 07:18:59.751477 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:18:59.752267 (sd-merge)[1201]: Merged extensions into '/usr'. Aug 13 07:18:59.756087 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:18:59.756104 systemd[1]: Reloading... Aug 13 07:18:59.864929 zram_generator::config[1224]: No configuration found. Aug 13 07:18:59.996410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:19:00.053785 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:19:00.054975 systemd[1]: Reloading finished in 298 ms. Aug 13 07:19:00.091494 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:19:00.093080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:19:00.109174 systemd[1]: Starting ensure-sysext.service... Aug 13 07:19:00.111736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:19:00.119780 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:19:00.119793 systemd[1]: Reloading... Aug 13 07:19:00.152164 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Aug 13 07:19:00.153952 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:19:00.155121 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:19:00.155424 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 07:19:00.155501 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 07:19:00.160868 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:19:00.161651 systemd-tmpfiles[1266]: Skipping /boot
Aug 13 07:19:00.180448 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:19:00.180608 systemd-tmpfiles[1266]: Skipping /boot
Aug 13 07:19:00.186925 zram_generator::config[1289]: No configuration found.
Aug 13 07:19:00.313649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:19:00.368603 systemd[1]: Reloading finished in 248 ms.
Aug 13 07:19:00.394602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:19:00.414349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:19:00.585174 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:19:00.588254 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:19:00.589787 augenrules[1347]: No rules
Aug 13 07:19:00.592089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:19:00.595264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:19:00.597050 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:19:00.603654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.603828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:19:00.605362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:19:00.608790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:19:00.640275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:19:00.641390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:19:00.643483 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:19:00.644445 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.645568 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:19:00.646395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:19:00.646580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:19:00.657386 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:19:00.659220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:19:00.659395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:19:00.661106 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:19:00.661278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:19:00.670002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.670387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:19:00.677285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:19:00.679815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:19:00.683288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:19:00.684454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:19:00.684711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:19:00.685065 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.686811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:19:00.687246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:19:00.689014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:19:00.689188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:19:00.691123 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:19:00.691291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:19:00.695007 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:19:00.700263 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:19:00.704859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.706279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:19:00.714211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:19:00.719029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:19:00.726003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:19:00.728496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:19:00.729744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:19:00.729853 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:19:00.729879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:19:00.730614 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:19:00.731858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:19:00.732064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:19:00.733496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:19:00.733675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:19:00.740222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:19:00.740411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:19:00.741997 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:19:00.742176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:19:00.744781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:19:00.744934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:19:00.756263 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:19:00.907712 systemd-resolved[1353]: Positive Trust Anchors:
Aug 13 07:19:00.907731 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:19:00.907763 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:19:00.911730 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:19:00.911732 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Aug 13 07:19:00.913224 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:19:00.959385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:19:00.960888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:19:01.010816 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:19:01.025157 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:19:01.027741 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:19:01.048695 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:19:01.051198 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
Aug 13 07:19:01.069628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:19:01.082333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:19:01.103493 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:19:01.183920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1412)
Aug 13 07:19:01.198858 systemd-networkd[1398]: lo: Link UP
Aug 13 07:19:01.198872 systemd-networkd[1398]: lo: Gained carrier
Aug 13 07:19:01.201350 systemd-networkd[1398]: Enumeration completed
Aug 13 07:19:01.202004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:19:01.202749 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:19:01.202754 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:19:01.203286 systemd[1]: Reached target network.target - Network.
Aug 13 07:19:01.203841 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:19:01.203873 systemd-networkd[1398]: eth0: Link UP
Aug 13 07:19:01.203877 systemd-networkd[1398]: eth0: Gained carrier
Aug 13 07:19:01.203886 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:19:01.210065 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:19:01.212062 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:19:01.213084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:19:01.214490 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Aug 13 07:19:01.215469 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 07:19:01.215619 systemd-timesyncd[1386]: Initial clock synchronization to Wed 2025-08-13 07:19:01.453948 UTC.
Aug 13 07:19:01.220142 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:19:01.221863 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 07:19:01.265976 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:19:01.287922 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Aug 13 07:19:01.288228 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 07:19:01.288408 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 07:19:01.291573 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 07:19:01.292505 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 13 07:19:01.301515 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:19:01.454956 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:19:01.499754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:19:01.538198 kernel: kvm_amd: TSC scaling supported
Aug 13 07:19:01.538287 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 07:19:01.538301 kernel: kvm_amd: Nested Paging enabled
Aug 13 07:19:01.539548 kernel: kvm_amd: LBR virtualization supported
Aug 13 07:19:01.539595 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 07:19:01.540367 kernel: kvm_amd: Virtual GIF supported
Aug 13 07:19:01.565280 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:19:01.576290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:19:01.604454 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:19:01.618100 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:19:01.628685 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:19:01.710698 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:19:01.712269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:19:01.713375 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:19:01.714534 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:19:01.715959 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:19:01.717556 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:19:01.718848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:19:01.720207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:19:01.721559 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:19:01.721592 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:19:01.722572 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:19:01.724627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:19:01.727582 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:19:01.736593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:19:01.739161 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:19:01.740797 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:19:01.742061 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:19:01.744055 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:19:01.745092 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:19:01.745120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:19:01.746203 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:19:01.748392 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:19:01.750379 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:19:01.751996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:19:01.756072 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:19:01.759038 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:19:01.760653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:19:01.762426 jq[1445]: false
Aug 13 07:19:01.762757 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:19:01.765727 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:19:01.769193 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:19:01.776032 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:19:01.778066 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:19:01.779058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:19:01.780176 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:19:01.784084 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:19:01.787043 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:19:01.790335 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:19:01.790578 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:19:01.796243 jq[1460]: true
Aug 13 07:19:01.800386 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:19:01.800687 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:19:01.802697 extend-filesystems[1446]: Found loop3
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found loop4
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found loop5
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found sr0
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda1
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda2
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda3
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found usr
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda4
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda6
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda7
Aug 13 07:19:01.826016 extend-filesystems[1446]: Found vda9
Aug 13 07:19:01.826016 extend-filesystems[1446]: Checking size of /dev/vda9
Aug 13 07:19:01.840013 update_engine[1458]: I20250813 07:19:01.817855 1458 main.cc:92] Flatcar Update Engine starting
Aug 13 07:19:01.802781 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:19:01.835647 dbus-daemon[1444]: [system] SELinux support is enabled
Aug 13 07:19:01.814100 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:19:01.840618 jq[1467]: true
Aug 13 07:19:01.836291 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:19:01.839834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:19:01.839867 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:19:01.841851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:19:01.841875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:19:01.846975 update_engine[1458]: I20250813 07:19:01.846855 1458 update_check_scheduler.cc:74] Next update check in 3m23s
Aug 13 07:19:01.847859 tar[1464]: linux-amd64/LICENSE
Aug 13 07:19:01.851407 tar[1464]: linux-amd64/helm
Aug 13 07:19:01.848092 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:19:01.851310 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:19:01.854125 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:19:01.864757 extend-filesystems[1446]: Resized partition /dev/vda9
Aug 13 07:19:01.867168 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:19:01.871293 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:19:01.871323 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:19:01.874973 systemd-logind[1453]: New seat seat0.
Aug 13 07:19:01.879942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 07:19:01.889635 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:19:01.979947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1405)
Aug 13 07:19:02.002276 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:19:02.302162 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:19:02.341731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:19:02.353164 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:19:02.361506 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:19:02.361735 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:19:02.364427 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:19:02.490330 tar[1464]: linux-amd64/README.md
Aug 13 07:19:02.504763 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 07:19:02.513465 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:19:02.610373 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:19:02.635772 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:19:02.637066 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:19:02.789573 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:19:02.789573 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 07:19:02.789573 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 07:19:02.795119 extend-filesystems[1446]: Resized filesystem in /dev/vda9
Aug 13 07:19:02.790896 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:19:02.791208 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:19:02.796526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:19:02.799498 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:19:02.801692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:19:02.804046 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:19:02.855305 containerd[1476]: time="2025-08-13T07:19:02.855154096Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:19:02.881546 containerd[1476]: time="2025-08-13T07:19:02.881466469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
type=io.containerd.snapshotter.v1
Aug 13 07:19:02.883521 containerd[1476]: time="2025-08-13T07:19:02.883486469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:02.883521 containerd[1476]: time="2025-08-13T07:19:02.883517968Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:19:02.883599 containerd[1476]: time="2025-08-13T07:19:02.883534497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:19:02.883752 containerd[1476]: time="2025-08-13T07:19:02.883725732Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:19:02.883752 containerd[1476]: time="2025-08-13T07:19:02.883744387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.883827 containerd[1476]: time="2025-08-13T07:19:02.883810893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:02.883855 containerd[1476]: time="2025-08-13T07:19:02.883825637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884068 containerd[1476]: time="2025-08-13T07:19:02.884047939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884068 containerd[1476]: time="2025-08-13T07:19:02.884066150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884140 containerd[1476]: time="2025-08-13T07:19:02.884079284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884140 containerd[1476]: time="2025-08-13T07:19:02.884089055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884209 containerd[1476]: time="2025-08-13T07:19:02.884191178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884492 containerd[1476]: time="2025-08-13T07:19:02.884461993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884611 containerd[1476]: time="2025-08-13T07:19:02.884593077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:02.884641 containerd[1476]: time="2025-08-13T07:19:02.884609007Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:19:02.884726 containerd[1476]: time="2025-08-13T07:19:02.884710976Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1
Aug 13 07:19:02.884787 containerd[1476]: time="2025-08-13T07:19:02.884772995Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:19:02.890191 containerd[1476]: time="2025-08-13T07:19:02.890153966Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:19:02.890240 containerd[1476]: time="2025-08-13T07:19:02.890201293Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:19:02.890240 containerd[1476]: time="2025-08-13T07:19:02.890216697Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:19:02.890240 containerd[1476]: time="2025-08-13T07:19:02.890231936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:19:02.890302 containerd[1476]: time="2025-08-13T07:19:02.890246463Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:19:02.890407 containerd[1476]: time="2025-08-13T07:19:02.890379342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:19:02.890626 containerd[1476]: time="2025-08-13T07:19:02.890602944Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:19:02.890731 containerd[1476]: time="2025-08-13T07:19:02.890707575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:19:02.890731 containerd[1476]: time="2025-08-13T07:19:02.890724619Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:19:02.890770 containerd[1476]: time="2025-08-13T07:19:02.890736804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1
Aug 13 07:19:02.890770 containerd[1476]: time="2025-08-13T07:19:02.890749258Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890770 containerd[1476]: time="2025-08-13T07:19:02.890761628Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890825 containerd[1476]: time="2025-08-13T07:19:02.890773163Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890825 containerd[1476]: time="2025-08-13T07:19:02.890786421Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890825 containerd[1476]: time="2025-08-13T07:19:02.890800247Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890825 containerd[1476]: time="2025-08-13T07:19:02.890812772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890824741Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890835677Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890855229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890868601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890880197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890892991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.890897 containerd[1476]: time="2025-08-13T07:19:02.890905269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.890918775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.890952358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.890964853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.890978390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.890992927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.891004565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.891015616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.891027388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1 Aug 13 07:19:02.891042 containerd[1476]: time="2025-08-13T07:19:02.891043153Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891061487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891072816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891082865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891132101Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891151364Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891165251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891177178Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:19:02.891202 containerd[1476]: time="2025-08-13T07:19:02.891192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:19:02.891364 containerd[1476]: time="2025-08-13T07:19:02.891208667Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Aug 13 07:19:02.891364 containerd[1476]: time="2025-08-13T07:19:02.891219067Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:19:02.891364 containerd[1476]: time="2025-08-13T07:19:02.891231437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:19:02.891591 containerd[1476]: time="2025-08-13T07:19:02.891516945Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:19:02.891591 containerd[1476]: time="2025-08-13T07:19:02.891589487Z" level=info msg="Connect containerd service" Aug 13 07:19:02.891744 containerd[1476]: time="2025-08-13T07:19:02.891620223Z" level=info msg="using legacy CRI server" Aug 13 07:19:02.891744 containerd[1476]: time="2025-08-13T07:19:02.891627456Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:19:02.891744 containerd[1476]: time="2025-08-13T07:19:02.891724544Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:19:02.892349 containerd[1476]: time="2025-08-13T07:19:02.892301326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:19:02.892418 containerd[1476]: time="2025-08-13T07:19:02.892377439Z" level=info msg="Start subscribing containerd event" Aug 13 
07:19:02.892448 containerd[1476]: time="2025-08-13T07:19:02.892419410Z" level=info msg="Start recovering state"
Aug 13 07:19:02.892490 containerd[1476]: time="2025-08-13T07:19:02.892478128Z" level=info msg="Start event monitor"
Aug 13 07:19:02.892536 containerd[1476]: time="2025-08-13T07:19:02.892503385Z" level=info msg="Start snapshots syncer"
Aug 13 07:19:02.892536 containerd[1476]: time="2025-08-13T07:19:02.892515952Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:19:02.892589 containerd[1476]: time="2025-08-13T07:19:02.892540301Z" level=info msg="Start streaming server"
Aug 13 07:19:02.892833 containerd[1476]: time="2025-08-13T07:19:02.892798941Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:19:02.892920 containerd[1476]: time="2025-08-13T07:19:02.892875281Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:19:02.892987 containerd[1476]: time="2025-08-13T07:19:02.892964455Z" level=info msg="containerd successfully booted in 0.040066s"
Aug 13 07:19:02.893063 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:19:03.263794 systemd-networkd[1398]: eth0: Gained IPv6LL
Aug 13 07:19:03.267597 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:19:03.269638 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:19:03.285242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 07:19:03.287864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:19:03.290161 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:19:03.309859 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 07:19:03.310159 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 07:19:03.312159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:19:03.314413 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:19:04.162463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:19:04.175455 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:19:04.176097 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:19:04.179985 systemd[1]: Startup finished in 768ms (kernel) + 6.615s (initrd) + 5.893s (userspace) = 13.277s.
Aug 13 07:19:04.346019 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:19:04.354345 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640).
Aug 13 07:19:04.402202 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:04.404763 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:04.414529 systemd-logind[1453]: New session 1 of user core.
Aug 13 07:19:04.415807 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:19:04.424167 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:19:04.437648 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:19:04.440659 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:19:04.462822 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:19:04.934221 systemd[1571]: Queued start job for default target default.target.
Aug 13 07:19:04.943283 systemd[1571]: Created slice app.slice - User Application Slice.
Aug 13 07:19:04.943310 systemd[1571]: Reached target paths.target - Paths.
Aug 13 07:19:04.943324 systemd[1571]: Reached target timers.target - Timers.
Aug 13 07:19:04.945610 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:19:04.961206 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:19:04.961359 systemd[1571]: Reached target sockets.target - Sockets.
Aug 13 07:19:04.961374 systemd[1571]: Reached target basic.target - Basic System.
Aug 13 07:19:04.961417 systemd[1571]: Reached target default.target - Main User Target.
Aug 13 07:19:04.961457 systemd[1571]: Startup finished in 152ms.
Aug 13 07:19:04.962559 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:19:05.020190 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:19:05.076046 kubelet[1556]: E0813 07:19:05.075952 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:19:05.083877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:19:05.084129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:19:05.084493 systemd[1]: kubelet.service: Consumed 1.599s CPU time.
Aug 13 07:19:05.087523 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:34642.service - OpenSSH per-connection server daemon (10.0.0.1:34642).
Aug 13 07:19:05.127151 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 34642 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.128602 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.133070 systemd-logind[1453]: New session 2 of user core.
Aug 13 07:19:05.144083 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:19:05.200644 sshd[1585]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:05.215779 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:34642.service: Deactivated successfully.
Aug 13 07:19:05.217842 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:19:05.219473 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:19:05.229274 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650).
Aug 13 07:19:05.230391 systemd-logind[1453]: Removed session 2.
Aug 13 07:19:05.264799 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.266316 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.270331 systemd-logind[1453]: New session 3 of user core.
Aug 13 07:19:05.280074 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:19:05.333317 sshd[1592]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:05.348992 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:34650.service: Deactivated successfully.
Aug 13 07:19:05.350664 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:19:05.352243 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:19:05.353489 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664).
Aug 13 07:19:05.354272 systemd-logind[1453]: Removed session 3.
Aug 13 07:19:05.393035 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.395012 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.399137 systemd-logind[1453]: New session 4 of user core.
Aug 13 07:19:05.415092 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:19:05.471191 sshd[1599]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:05.482811 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:34664.service: Deactivated successfully.
Aug 13 07:19:05.485270 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:19:05.487205 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:19:05.497215 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:34680.service - OpenSSH per-connection server daemon (10.0.0.1:34680).
Aug 13 07:19:05.498257 systemd-logind[1453]: Removed session 4.
Aug 13 07:19:05.531637 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 34680 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.534059 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.538348 systemd-logind[1453]: New session 5 of user core.
Aug 13 07:19:05.548080 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:19:05.607756 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:19:05.608117 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:19:05.633558 sudo[1609]: pam_unix(sudo:session): session closed for user root
Aug 13 07:19:05.636045 sshd[1606]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:05.651371 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:34680.service: Deactivated successfully.
Aug 13 07:19:05.653677 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:19:05.655307 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:19:05.665243 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:34696.service - OpenSSH per-connection server daemon (10.0.0.1:34696).
Aug 13 07:19:05.666392 systemd-logind[1453]: Removed session 5.
Aug 13 07:19:05.699716 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 34696 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.701272 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.705399 systemd-logind[1453]: New session 6 of user core.
Aug 13 07:19:05.720043 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:19:05.776137 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:19:05.776569 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:19:05.781299 sudo[1618]: pam_unix(sudo:session): session closed for user root
Aug 13 07:19:05.788647 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:19:05.789029 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:19:05.809144 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:19:05.810958 auditctl[1621]: No rules
Aug 13 07:19:05.812671 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:19:05.813009 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:19:05.815386 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:19:05.855743 augenrules[1639]: No rules
Aug 13 07:19:05.857981 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:19:05.859412 sudo[1617]: pam_unix(sudo:session): session closed for user root
Aug 13 07:19:05.861412 sshd[1614]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:05.869128 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:34696.service: Deactivated successfully.
Aug 13 07:19:05.871296 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:19:05.873553 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:19:05.885334 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:34698.service - OpenSSH per-connection server daemon (10.0.0.1:34698).
Aug 13 07:19:05.886602 systemd-logind[1453]: Removed session 6.
Aug 13 07:19:05.921067 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 34698 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:19:05.922657 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:05.927471 systemd-logind[1453]: New session 7 of user core.
Aug 13 07:19:05.937067 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:19:05.994025 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:19:05.994458 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:19:06.701197 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:19:06.702056 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:19:07.496431 dockerd[1668]: time="2025-08-13T07:19:07.496342482Z" level=info msg="Starting up"
Aug 13 07:19:08.594909 dockerd[1668]: time="2025-08-13T07:19:08.594815942Z" level=info msg="Loading containers: start."
Aug 13 07:19:08.759949 kernel: Initializing XFRM netlink socket
Aug 13 07:19:08.860461 systemd-networkd[1398]: docker0: Link UP
Aug 13 07:19:08.924767 dockerd[1668]: time="2025-08-13T07:19:08.924694770Z" level=info msg="Loading containers: done."
Aug 13 07:19:08.949983 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3923104827-merged.mount: Deactivated successfully.
Aug 13 07:19:08.953973 dockerd[1668]: time="2025-08-13T07:19:08.953926800Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:19:08.954088 dockerd[1668]: time="2025-08-13T07:19:08.954067081Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:19:08.954228 dockerd[1668]: time="2025-08-13T07:19:08.954209354Z" level=info msg="Daemon has completed initialization"
Aug 13 07:19:09.012015 dockerd[1668]: time="2025-08-13T07:19:09.011925855Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:19:09.012342 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:19:10.078395 containerd[1476]: time="2025-08-13T07:19:10.078321275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 07:19:10.788742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723475764.mount: Deactivated successfully.
Aug 13 07:19:12.540206 containerd[1476]: time="2025-08-13T07:19:12.540128031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.541410 containerd[1476]: time="2025-08-13T07:19:12.541254828Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 07:19:12.544054 containerd[1476]: time="2025-08-13T07:19:12.543992349Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.550254 containerd[1476]: time="2025-08-13T07:19:12.550187444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.551117 containerd[1476]: time="2025-08-13T07:19:12.551084920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.472690477s" Aug 13 07:19:12.551182 containerd[1476]: time="2025-08-13T07:19:12.551121988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 07:19:12.551980 containerd[1476]: time="2025-08-13T07:19:12.551932827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 07:19:14.236477 containerd[1476]: time="2025-08-13T07:19:14.236394590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.237402 containerd[1476]: time="2025-08-13T07:19:14.237327952Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 07:19:14.238583 containerd[1476]: time="2025-08-13T07:19:14.238550590Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.242596 containerd[1476]: time="2025-08-13T07:19:14.242519300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.245920 containerd[1476]: time="2025-08-13T07:19:14.245084395Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.69310528s" Aug 13 07:19:14.245920 containerd[1476]: time="2025-08-13T07:19:14.245137179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 07:19:14.246197 containerd[1476]: time="2025-08-13T07:19:14.246154741Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 07:19:15.334785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:19:15.344083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:15.770162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:19:15.782277 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:15.944202 kubelet[1888]: E0813 07:19:15.944143 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:15.950812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:15.951049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:19:17.518472 containerd[1476]: time="2025-08-13T07:19:17.518400682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:17.520467 containerd[1476]: time="2025-08-13T07:19:17.520404253Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 07:19:17.522163 containerd[1476]: time="2025-08-13T07:19:17.522114520Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:17.525758 containerd[1476]: time="2025-08-13T07:19:17.525686696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:17.526809 containerd[1476]: time="2025-08-13T07:19:17.526774174Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 3.280576634s" Aug 13 07:19:17.526897 containerd[1476]: time="2025-08-13T07:19:17.526815115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 07:19:17.527367 containerd[1476]: time="2025-08-13T07:19:17.527342444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 07:19:20.307264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171067741.mount: Deactivated successfully. Aug 13 07:19:20.870759 containerd[1476]: time="2025-08-13T07:19:20.870689875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:20.871628 containerd[1476]: time="2025-08-13T07:19:20.871560253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 07:19:20.873063 containerd[1476]: time="2025-08-13T07:19:20.873006856Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:20.875390 containerd[1476]: time="2025-08-13T07:19:20.875345776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:20.876154 containerd[1476]: time="2025-08-13T07:19:20.876111527Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 3.348733963s" Aug 13 07:19:20.876190 containerd[1476]: time="2025-08-13T07:19:20.876151278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 07:19:20.876642 containerd[1476]: time="2025-08-13T07:19:20.876607236Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:19:21.825523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289864920.mount: Deactivated successfully. Aug 13 07:19:23.018926 containerd[1476]: time="2025-08-13T07:19:23.018812553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.020346 containerd[1476]: time="2025-08-13T07:19:23.020226367Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:19:23.022414 containerd[1476]: time="2025-08-13T07:19:23.022351604Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.026392 containerd[1476]: time="2025-08-13T07:19:23.026334538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.027779 containerd[1476]: time="2025-08-13T07:19:23.027735615Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.15109168s" Aug 13 07:19:23.027779 containerd[1476]: time="2025-08-13T07:19:23.027770302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:19:23.028462 containerd[1476]: time="2025-08-13T07:19:23.028399545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:19:23.625885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234650604.mount: Deactivated successfully. Aug 13 07:19:23.631966 containerd[1476]: time="2025-08-13T07:19:23.631914385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.632625 containerd[1476]: time="2025-08-13T07:19:23.632564222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:19:23.633762 containerd[1476]: time="2025-08-13T07:19:23.633716014Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.637309 containerd[1476]: time="2025-08-13T07:19:23.637272409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.638126 containerd[1476]: time="2025-08-13T07:19:23.638089191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.645073ms" Aug 13 
07:19:23.638126 containerd[1476]: time="2025-08-13T07:19:23.638118729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:19:23.638683 containerd[1476]: time="2025-08-13T07:19:23.638647461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 07:19:24.326749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954199514.mount: Deactivated successfully. Aug 13 07:19:26.201247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:19:26.219127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:26.873726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:26.877916 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:27.467679 kubelet[2020]: E0813 07:19:27.467625 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:27.473328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:27.473829 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:19:27.603450 containerd[1476]: time="2025-08-13T07:19:27.603387145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.604095 containerd[1476]: time="2025-08-13T07:19:27.604036618Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 07:19:27.605323 containerd[1476]: time="2025-08-13T07:19:27.605294463Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.608413 containerd[1476]: time="2025-08-13T07:19:27.608358120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.609910 containerd[1476]: time="2025-08-13T07:19:27.609866229Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.971174983s" Aug 13 07:19:27.609960 containerd[1476]: time="2025-08-13T07:19:27.609909776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:19:30.257706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:30.268169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:30.292357 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-7.scope)... Aug 13 07:19:30.292378 systemd[1]: Reloading... 
Aug 13 07:19:30.387982 zram_generator::config[2107]: No configuration found. Aug 13 07:19:30.686175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:19:30.762380 systemd[1]: Reloading finished in 469 ms. Aug 13 07:19:30.812026 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:30.816188 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:19:30.816438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:30.818072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:30.984167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:30.988755 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:19:31.028136 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:19:31.028136 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:19:31.028136 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:19:31.028433 kubelet[2151]: I0813 07:19:31.028244 2151 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:19:31.273961 kubelet[2151]: I0813 07:19:31.273841 2151 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:19:31.273961 kubelet[2151]: I0813 07:19:31.273876 2151 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:19:31.274200 kubelet[2151]: I0813 07:19:31.274176 2151 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:19:31.296303 kubelet[2151]: E0813 07:19:31.296258 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:31.296464 kubelet[2151]: I0813 07:19:31.296445 2151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:19:31.304751 kubelet[2151]: E0813 07:19:31.304688 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:19:31.304751 kubelet[2151]: I0813 07:19:31.304737 2151 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:19:31.309954 kubelet[2151]: I0813 07:19:31.309921 2151 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:19:31.311099 kubelet[2151]: I0813 07:19:31.311050 2151 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:19:31.311292 kubelet[2151]: I0813 07:19:31.311079 2151 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:19:31.311387 kubelet[2151]: I0813 07:19:31.311300 2151 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 13 07:19:31.311387 kubelet[2151]: I0813 07:19:31.311311 2151 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:19:31.311497 kubelet[2151]: I0813 07:19:31.311475 2151 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:31.313980 kubelet[2151]: I0813 07:19:31.313936 2151 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:19:31.313980 kubelet[2151]: I0813 07:19:31.313973 2151 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:19:31.314161 kubelet[2151]: I0813 07:19:31.313999 2151 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:19:31.314161 kubelet[2151]: I0813 07:19:31.314013 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:19:31.317414 kubelet[2151]: I0813 07:19:31.317309 2151 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:19:31.318253 kubelet[2151]: I0813 07:19:31.317848 2151 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:19:31.318693 kubelet[2151]: W0813 07:19:31.318630 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Aug 13 07:19:31.318725 kubelet[2151]: E0813 07:19:31.318713 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:31.319656 kubelet[2151]: W0813 07:19:31.319470 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ 
does not exist. Recreating. Aug 13 07:19:31.319656 kubelet[2151]: W0813 07:19:31.319584 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Aug 13 07:19:31.319656 kubelet[2151]: E0813 07:19:31.319621 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:31.321532 kubelet[2151]: I0813 07:19:31.321503 2151 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:19:31.321582 kubelet[2151]: I0813 07:19:31.321544 2151 server.go:1287] "Started kubelet" Aug 13 07:19:31.322603 kubelet[2151]: I0813 07:19:31.321675 2151 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:19:31.323554 kubelet[2151]: I0813 07:19:31.322680 2151 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:19:31.323554 kubelet[2151]: I0813 07:19:31.323083 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:19:31.323554 kubelet[2151]: I0813 07:19:31.323459 2151 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:19:31.324534 kubelet[2151]: I0813 07:19:31.323793 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:19:31.324534 kubelet[2151]: I0813 07:19:31.324135 2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:19:31.324892 kubelet[2151]: E0813 07:19:31.324876 2151 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.325011 kubelet[2151]: I0813 07:19:31.324998 2151 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:19:31.325323 kubelet[2151]: I0813 07:19:31.325291 2151 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:19:31.325489 kubelet[2151]: I0813 07:19:31.325478 2151 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:19:31.326247 kubelet[2151]: W0813 07:19:31.326206 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Aug 13 07:19:31.326304 kubelet[2151]: E0813 07:19:31.326262 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:31.326702 kubelet[2151]: I0813 07:19:31.326501 2151 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:19:31.326702 kubelet[2151]: I0813 07:19:31.326574 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:19:31.326702 kubelet[2151]: E0813 07:19:31.326664 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="200ms" Aug 13 07:19:31.327629 kubelet[2151]: E0813 07:19:31.327458 2151 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:19:31.328039 kubelet[2151]: E0813 07:19:31.327086 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b4279f2b75aeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:19:31.321518827 +0000 UTC m=+0.328780139,LastTimestamp:2025-08-13 07:19:31.321518827 +0000 UTC m=+0.328780139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:19:31.328654 kubelet[2151]: I0813 07:19:31.328325 2151 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:19:31.341234 kubelet[2151]: I0813 07:19:31.341185 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:19:31.343066 kubelet[2151]: I0813 07:19:31.343042 2151 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:19:31.343066 kubelet[2151]: I0813 07:19:31.343056 2151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:19:31.343149 kubelet[2151]: I0813 07:19:31.343060 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:19:31.343149 kubelet[2151]: I0813 07:19:31.343095 2151 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:19:31.343149 kubelet[2151]: I0813 07:19:31.343121 2151 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:19:31.343149 kubelet[2151]: I0813 07:19:31.343130 2151 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:19:31.343239 kubelet[2151]: E0813 07:19:31.343179 2151 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:19:31.343649 kubelet[2151]: W0813 07:19:31.343616 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Aug 13 07:19:31.343689 kubelet[2151]: E0813 07:19:31.343654 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:31.343689 kubelet[2151]: I0813 07:19:31.343076 2151 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:31.425828 kubelet[2151]: E0813 07:19:31.425779 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.444140 kubelet[2151]: E0813 07:19:31.444091 2151 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:19:31.526803 kubelet[2151]: E0813 07:19:31.526681 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.527299 kubelet[2151]: E0813 07:19:31.527267 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" 
interval="400ms" Aug 13 07:19:31.627679 kubelet[2151]: E0813 07:19:31.627619 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.644818 kubelet[2151]: E0813 07:19:31.644777 2151 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:19:31.728313 kubelet[2151]: E0813 07:19:31.728277 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.829462 kubelet[2151]: E0813 07:19:31.829322 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:19:31.866337 kubelet[2151]: I0813 07:19:31.866295 2151 policy_none.go:49] "None policy: Start" Aug 13 07:19:31.866337 kubelet[2151]: I0813 07:19:31.866336 2151 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:19:31.866337 kubelet[2151]: I0813 07:19:31.866354 2151 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:19:31.877260 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:19:31.889407 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:19:31.892464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 07:19:31.899884 kubelet[2151]: I0813 07:19:31.899835 2151 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:19:31.900163 kubelet[2151]: I0813 07:19:31.900142 2151 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:19:31.900222 kubelet[2151]: I0813 07:19:31.900164 2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:19:31.900632 kubelet[2151]: I0813 07:19:31.900469 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:19:31.902236 kubelet[2151]: E0813 07:19:31.902210 2151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:19:31.902294 kubelet[2151]: E0813 07:19:31.902267 2151 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:19:31.928814 kubelet[2151]: E0813 07:19:31.928773 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="800ms" Aug 13 07:19:32.002221 kubelet[2151]: I0813 07:19:32.002177 2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:19:32.002666 kubelet[2151]: E0813 07:19:32.002613 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Aug 13 07:19:32.052469 systemd[1]: Created slice kubepods-burstable-pod481e656da08d962264a112f273402084.slice - libcontainer container kubepods-burstable-pod481e656da08d962264a112f273402084.slice. 
Aug 13 07:19:32.072551 kubelet[2151]: E0813 07:19:32.072512 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:19:32.075129 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 13 07:19:32.083121 kubelet[2151]: E0813 07:19:32.083043 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:19:32.085617 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. Aug 13 07:19:32.087109 kubelet[2151]: E0813 07:19:32.087079 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:19:32.130530 kubelet[2151]: I0813 07:19:32.130498 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:32.130530 kubelet[2151]: I0813 07:19:32.130527 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:32.130619 kubelet[2151]: I0813 07:19:32.130567 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:32.130619 kubelet[2151]: I0813 07:19:32.130588 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:19:32.130619 kubelet[2151]: I0813 07:19:32.130601 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:32.130699 kubelet[2151]: I0813 07:19:32.130618 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:32.130699 kubelet[2151]: I0813 07:19:32.130633 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:32.130699 kubelet[2151]: I0813 07:19:32.130674 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:32.130699 kubelet[2151]: I0813 07:19:32.130692 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:32.204471 kubelet[2151]: I0813 07:19:32.204450 2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:19:32.204810 kubelet[2151]: E0813 07:19:32.204772 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Aug 13 07:19:32.279755 kubelet[2151]: W0813 07:19:32.279723 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Aug 13 07:19:32.279831 kubelet[2151]: E0813 07:19:32.279758 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:32.373997 kubelet[2151]: E0813 07:19:32.373875 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
07:19:32.374475 containerd[1476]: time="2025-08-13T07:19:32.374380490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:481e656da08d962264a112f273402084,Namespace:kube-system,Attempt:0,}"
Aug 13 07:19:32.383695 kubelet[2151]: E0813 07:19:32.383653    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:32.384101 containerd[1476]: time="2025-08-13T07:19:32.384061351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}"
Aug 13 07:19:32.388447 kubelet[2151]: E0813 07:19:32.388418    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:32.388803 containerd[1476]: time="2025-08-13T07:19:32.388775732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}"
Aug 13 07:19:32.437733 kubelet[2151]: W0813 07:19:32.437671    2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused
Aug 13 07:19:32.437733 kubelet[2151]: E0813 07:19:32.437727    2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:32.485305 kubelet[2151]: W0813 07:19:32.485265    2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused
Aug 13 07:19:32.485305 kubelet[2151]: E0813 07:19:32.485298    2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:32.606051 kubelet[2151]: I0813 07:19:32.606006    2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 07:19:32.606391 kubelet[2151]: E0813 07:19:32.606342    2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost"
Aug 13 07:19:32.729700 kubelet[2151]: E0813 07:19:32.729589    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="1.6s"
Aug 13 07:19:32.855577 kubelet[2151]: W0813 07:19:32.855513    2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused
Aug 13 07:19:32.855641 kubelet[2151]: E0813 07:19:32.855584    2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:33.407432 kubelet[2151]: I0813 07:19:33.407397    2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 07:19:33.407790 kubelet[2151]: E0813 07:19:33.407629    2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost"
Aug 13 07:19:33.412638 kubelet[2151]: E0813 07:19:33.412610    2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:33.605645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968633910.mount: Deactivated successfully.
Aug 13 07:19:33.611619 containerd[1476]: time="2025-08-13T07:19:33.611575586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:19:33.613222 containerd[1476]: time="2025-08-13T07:19:33.613169320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:19:33.614129 containerd[1476]: time="2025-08-13T07:19:33.614100967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:19:33.615053 containerd[1476]: time="2025-08-13T07:19:33.615023763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:19:33.615830 containerd[1476]: time="2025-08-13T07:19:33.615801940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 13 07:19:33.616957 containerd[1476]: time="2025-08-13T07:19:33.616919214Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:19:33.617668 containerd[1476]: time="2025-08-13T07:19:33.617626781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:19:33.620276 containerd[1476]: time="2025-08-13T07:19:33.620243351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:19:33.622368 containerd[1476]: time="2025-08-13T07:19:33.622337854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.238224958s"
Aug 13 07:19:33.623046 containerd[1476]: time="2025-08-13T07:19:33.623021573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.234199439s"
Aug 13 07:19:33.624147 containerd[1476]: time="2025-08-13T07:19:33.624095934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.249636904s"
Aug 13 07:19:34.179875 containerd[1476]: time="2025-08-13T07:19:34.179763076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:19:34.180201 containerd[1476]: time="2025-08-13T07:19:34.179844594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:19:34.181025 containerd[1476]: time="2025-08-13T07:19:34.180942058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:19:34.181025 containerd[1476]: time="2025-08-13T07:19:34.181003858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:19:34.181379 containerd[1476]: time="2025-08-13T07:19:34.181027165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.181379 containerd[1476]: time="2025-08-13T07:19:34.181152191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.181379 containerd[1476]: time="2025-08-13T07:19:34.181241379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.181379 containerd[1476]: time="2025-08-13T07:19:34.181342556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.182373 containerd[1476]: time="2025-08-13T07:19:34.182286083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:19:34.182425 containerd[1476]: time="2025-08-13T07:19:34.182346610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:19:34.182425 containerd[1476]: time="2025-08-13T07:19:34.182361637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.182488 containerd[1476]: time="2025-08-13T07:19:34.182441271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:34.318654 kubelet[2151]: W0813 07:19:34.318543    2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused
Aug 13 07:19:34.318654 kubelet[2151]: E0813 07:19:34.318592    2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:34.331041 kubelet[2151]: E0813 07:19:34.330951    2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="3.2s"
Aug 13 07:19:34.920209 systemd[1]: Started cri-containerd-79e8be2ebf6f3c40501413a4cc945ab6bf8cf81f27604011f926add37d1b6dbe.scope - libcontainer container 79e8be2ebf6f3c40501413a4cc945ab6bf8cf81f27604011f926add37d1b6dbe.
Aug 13 07:19:34.945057 systemd[1]: Started cri-containerd-629f2cc00843ae59e294a6c69a4c4c00e099112cf91f9747c68effa5a480eb37.scope - libcontainer container 629f2cc00843ae59e294a6c69a4c4c00e099112cf91f9747c68effa5a480eb37.
Aug 13 07:19:34.946612 systemd[1]: Started cri-containerd-658c9bc690ff5800e9640180c4f3903b87a4269f4316a4eddf1ecdbd136d98ce.scope - libcontainer container 658c9bc690ff5800e9640180c4f3903b87a4269f4316a4eddf1ecdbd136d98ce.
Aug 13 07:19:34.976765 containerd[1476]: time="2025-08-13T07:19:34.976725780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"79e8be2ebf6f3c40501413a4cc945ab6bf8cf81f27604011f926add37d1b6dbe\""
Aug 13 07:19:34.981298 kubelet[2151]: E0813 07:19:34.981169    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:34.985415 containerd[1476]: time="2025-08-13T07:19:34.984556831Z" level=info msg="CreateContainer within sandbox \"79e8be2ebf6f3c40501413a4cc945ab6bf8cf81f27604011f926add37d1b6dbe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 07:19:34.986236 containerd[1476]: time="2025-08-13T07:19:34.986207425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:481e656da08d962264a112f273402084,Namespace:kube-system,Attempt:0,} returns sandbox id \"629f2cc00843ae59e294a6c69a4c4c00e099112cf91f9747c68effa5a480eb37\""
Aug 13 07:19:34.987058 kubelet[2151]: E0813 07:19:34.987038    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:34.987368 containerd[1476]: time="2025-08-13T07:19:34.987347070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"658c9bc690ff5800e9640180c4f3903b87a4269f4316a4eddf1ecdbd136d98ce\""
Aug 13 07:19:34.990474 containerd[1476]: time="2025-08-13T07:19:34.990439149Z" level=info msg="CreateContainer within sandbox \"629f2cc00843ae59e294a6c69a4c4c00e099112cf91f9747c68effa5a480eb37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 07:19:34.990811 kubelet[2151]: E0813 07:19:34.990695    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:34.992563 containerd[1476]: time="2025-08-13T07:19:34.992543943Z" level=info msg="CreateContainer within sandbox \"658c9bc690ff5800e9640180c4f3903b87a4269f4316a4eddf1ecdbd136d98ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 07:19:35.006213 containerd[1476]: time="2025-08-13T07:19:35.006100013Z" level=info msg="CreateContainer within sandbox \"79e8be2ebf6f3c40501413a4cc945ab6bf8cf81f27604011f926add37d1b6dbe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e06116fe30dce42bc4aeb7424a66c6d3179f9d8633ab30d9324c6ab20611651\""
Aug 13 07:19:35.006726 containerd[1476]: time="2025-08-13T07:19:35.006690216Z" level=info msg="StartContainer for \"9e06116fe30dce42bc4aeb7424a66c6d3179f9d8633ab30d9324c6ab20611651\""
Aug 13 07:19:35.008962 kubelet[2151]: I0813 07:19:35.008938    2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 07:19:35.009529 kubelet[2151]: E0813 07:19:35.009429    2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost"
Aug 13 07:19:35.013520 containerd[1476]: time="2025-08-13T07:19:35.013418749Z" level=info msg="CreateContainer within sandbox \"629f2cc00843ae59e294a6c69a4c4c00e099112cf91f9747c68effa5a480eb37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"91812236dfaad3f227ef1094bbead16698c555d480f2c3e36db915fb621b4ae4\""
Aug 13 07:19:35.013803 containerd[1476]: time="2025-08-13T07:19:35.013780282Z" level=info msg="StartContainer for \"91812236dfaad3f227ef1094bbead16698c555d480f2c3e36db915fb621b4ae4\""
Aug 13 07:19:35.016215 containerd[1476]: time="2025-08-13T07:19:35.016115084Z" level=info msg="CreateContainer within sandbox \"658c9bc690ff5800e9640180c4f3903b87a4269f4316a4eddf1ecdbd136d98ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"456b444f9619e0ff480452fa6cb52e17d112c7ab24968a4e5833d05cd84f9ee3\""
Aug 13 07:19:35.016519 containerd[1476]: time="2025-08-13T07:19:35.016498369Z" level=info msg="StartContainer for \"456b444f9619e0ff480452fa6cb52e17d112c7ab24968a4e5833d05cd84f9ee3\""
Aug 13 07:19:35.035034 systemd[1]: Started cri-containerd-9e06116fe30dce42bc4aeb7424a66c6d3179f9d8633ab30d9324c6ab20611651.scope - libcontainer container 9e06116fe30dce42bc4aeb7424a66c6d3179f9d8633ab30d9324c6ab20611651.
Aug 13 07:19:35.039829 systemd[1]: Started cri-containerd-456b444f9619e0ff480452fa6cb52e17d112c7ab24968a4e5833d05cd84f9ee3.scope - libcontainer container 456b444f9619e0ff480452fa6cb52e17d112c7ab24968a4e5833d05cd84f9ee3.
Aug 13 07:19:35.041082 systemd[1]: Started cri-containerd-91812236dfaad3f227ef1094bbead16698c555d480f2c3e36db915fb621b4ae4.scope - libcontainer container 91812236dfaad3f227ef1094bbead16698c555d480f2c3e36db915fb621b4ae4.
Aug 13 07:19:35.077423 containerd[1476]: time="2025-08-13T07:19:35.077270368Z" level=info msg="StartContainer for \"9e06116fe30dce42bc4aeb7424a66c6d3179f9d8633ab30d9324c6ab20611651\" returns successfully"
Aug 13 07:19:35.082509 containerd[1476]: time="2025-08-13T07:19:35.082467105Z" level=info msg="StartContainer for \"456b444f9619e0ff480452fa6cb52e17d112c7ab24968a4e5833d05cd84f9ee3\" returns successfully"
Aug 13 07:19:35.089256 containerd[1476]: time="2025-08-13T07:19:35.089185504Z" level=info msg="StartContainer for \"91812236dfaad3f227ef1094bbead16698c555d480f2c3e36db915fb621b4ae4\" returns successfully"
Aug 13 07:19:35.096428 kubelet[2151]: W0813 07:19:35.096356    2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused
Aug 13 07:19:35.096428 kubelet[2151]: E0813 07:19:35.096403    2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:19:35.355635 kubelet[2151]: E0813 07:19:35.355372    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:35.355635 kubelet[2151]: E0813 07:19:35.355475    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:35.359175 kubelet[2151]: E0813 07:19:35.358977    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:35.359175 kubelet[2151]: E0813 07:19:35.359059    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:35.362933 kubelet[2151]: E0813 07:19:35.362678    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:35.362933 kubelet[2151]: E0813 07:19:35.362758    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:36.364353 kubelet[2151]: E0813 07:19:36.364321    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:36.365021 kubelet[2151]: E0813 07:19:36.364444    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:36.365021 kubelet[2151]: E0813 07:19:36.364481    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:36.365021 kubelet[2151]: E0813 07:19:36.364630    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:36.422605 kubelet[2151]: E0813 07:19:36.422568    2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 13 07:19:36.773205 kubelet[2151]: E0813 07:19:36.773108    2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 13 07:19:37.216155 kubelet[2151]: E0813 07:19:37.216020    2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 13 07:19:37.533963 kubelet[2151]: E0813 07:19:37.533813    2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 13 07:19:38.124627 kubelet[2151]: E0813 07:19:38.124591    2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 13 07:19:38.211279 kubelet[2151]: I0813 07:19:38.211246    2151 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 07:19:38.216157 kubelet[2151]: I0813 07:19:38.216122    2151 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 13 07:19:38.216157 kubelet[2151]: E0813 07:19:38.216148    2151 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Aug 13 07:19:38.222515 kubelet[2151]: E0813 07:19:38.222467    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.323457 kubelet[2151]: E0813 07:19:38.323411    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.354525 kubelet[2151]: E0813 07:19:38.354478    2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 07:19:38.354705 kubelet[2151]: E0813 07:19:38.354611    2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:19:38.424275 kubelet[2151]: E0813 07:19:38.424152    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.524976 kubelet[2151]: E0813 07:19:38.524886    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.586422 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-7.scope)...
Aug 13 07:19:38.586438 systemd[1]: Reloading...
Aug 13 07:19:38.625464 kubelet[2151]: E0813 07:19:38.625427    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.660936 zram_generator::config[2472]: No configuration found.
Aug 13 07:19:38.726504 kubelet[2151]: E0813 07:19:38.726393    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.767201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:19:38.827444 kubelet[2151]: E0813 07:19:38.827415    2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:38.854735 systemd[1]: Reloading finished in 267 ms.
Aug 13 07:19:38.898262 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:19:38.919259 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 07:19:38.919538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:19:38.931101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:19:39.089248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:19:39.094785 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:19:39.132193 kubelet[2513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:19:39.132610 kubelet[2513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:19:39.132751 kubelet[2513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:19:39.132871 kubelet[2513]: I0813 07:19:39.132827    2513 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:19:39.138885 kubelet[2513]: I0813 07:19:39.138851    2513 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 07:19:39.138885 kubelet[2513]: I0813 07:19:39.138877    2513 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:19:39.139203 kubelet[2513]: I0813 07:19:39.139181    2513 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 07:19:39.140353 kubelet[2513]: I0813 07:19:39.140329    2513 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 07:19:39.142592 kubelet[2513]: I0813 07:19:39.142566    2513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:19:39.146607 kubelet[2513]: E0813 07:19:39.146569    2513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:19:39.146607 kubelet[2513]: I0813 07:19:39.146594    2513 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:19:39.151188 kubelet[2513]: I0813 07:19:39.151171    2513 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:19:39.151415 kubelet[2513]: I0813 07:19:39.151387    2513 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:19:39.151573 kubelet[2513]: I0813 07:19:39.151411    2513 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:19:39.151573 kubelet[2513]: I0813 07:19:39.151571    2513 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:19:39.151683 kubelet[2513]: I0813 07:19:39.151580    2513 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 07:19:39.151683 kubelet[2513]: I0813 07:19:39.151629    2513 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:19:39.151781 kubelet[2513]: I0813 07:19:39.151768    2513 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 07:19:39.151807 kubelet[2513]: I0813 07:19:39.151794    2513 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:19:39.151845 kubelet[2513]: I0813 07:19:39.151814    2513 kubelet.go:352] "Adding apiserver pod source"
Aug 13 07:19:39.151845 kubelet[2513]: I0813 07:19:39.151827    2513 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:19:39.153086 kubelet[2513]: I0813 07:19:39.152769    2513 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:19:39.153302 kubelet[2513]: I0813 07:19:39.153177    2513 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.153630    2513 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.153661    2513 server.go:1287] "Started kubelet"
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.154258    2513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.154559    2513 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.154611    2513 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.155642    2513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:19:39.155929 kubelet[2513]: I0813 07:19:39.155850    2513 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 07:19:39.156192 kubelet[2513]: I0813 07:19:39.156137    2513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:19:39.164058 kubelet[2513]: I0813 07:19:39.164030    2513 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:19:39.164370 kubelet[2513]: E0813 07:19:39.164339    2513 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:19:39.165736 kubelet[2513]: I0813 07:19:39.165705    2513 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:19:39.165972 kubelet[2513]: I0813 07:19:39.165877    2513 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:19:39.166294 kubelet[2513]: I0813 07:19:39.166254    2513 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:19:39.168043 kubelet[2513]: I0813 07:19:39.167976    2513 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:19:39.168043 kubelet[2513]: I0813 07:19:39.167998    2513 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:19:39.169480 kubelet[2513]: E0813 07:19:39.169459    2513 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:19:39.171390 kubelet[2513]: I0813 07:19:39.171349    2513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:19:39.172611 kubelet[2513]: I0813 07:19:39.172580    2513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:19:39.172638 kubelet[2513]: I0813 07:19:39.172613    2513 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 07:19:39.172730 kubelet[2513]: I0813 07:19:39.172697    2513 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:19:39.172758 kubelet[2513]: I0813 07:19:39.172727    2513 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 07:19:39.172815 kubelet[2513]: E0813 07:19:39.172776    2513 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:19:39.200071 kubelet[2513]: I0813 07:19:39.200035    2513 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 07:19:39.200071 kubelet[2513]: I0813 07:19:39.200051    2513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 07:19:39.200071 kubelet[2513]: I0813 07:19:39.200069    2513 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:19:39.200230 kubelet[2513]: I0813 07:19:39.200209    2513 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 07:19:39.200256 kubelet[2513]: I0813 07:19:39.200220    2513 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 07:19:39.200256 kubelet[2513]: I0813 07:19:39.200238    2513 policy_none.go:49] "None policy: Start"
Aug 13 07:19:39.200256 kubelet[2513]: I0813 07:19:39.200247    2513 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 07:19:39.200256 kubelet[2513]: I0813 07:19:39.200257    2513 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:19:39.200369 kubelet[2513]: I0813 07:19:39.200353    2513 state_mem.go:75] "Updated machine memory state"
Aug 13 07:19:39.206095 kubelet[2513]: I0813 07:19:39.206072    2513 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:19:39.206422 kubelet[2513]: I0813 07:19:39.206244    2513 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:19:39.206422 kubelet[2513]: I0813 07:19:39.206265    2513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:19:39.206703 kubelet[2513]: I0813 07:19:39.206670    2513 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:19:39.207575 kubelet[2513]: E0813 07:19:39.207543    2513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 07:19:39.273728 kubelet[2513]: I0813 07:19:39.273684    2513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 13 07:19:39.273909 kubelet[2513]: I0813 07:19:39.273823    2513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 13 07:19:39.273970 kubelet[2513]: I0813 07:19:39.273937    2513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 13 07:19:39.311077 kubelet[2513]: I0813 07:19:39.311037    2513 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 07:19:39.367685 kubelet[2513]: I0813 07:19:39.367561    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 07:19:39.367685 kubelet[2513]: I0813 07:19:39.367597    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 07:19:39.367685 kubelet[2513]: I0813 07:19:39.367619    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 07:19:39.367685 kubelet[2513]: I0813 07:19:39.367634    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 07:19:39.367685 kubelet[2513]: I0813 07:19:39.367654    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 07:19:39.367943 kubelet[2513]: I0813 07:19:39.367673    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 07:19:39.367943 kubelet[2513]: I0813 07:19:39.367720    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 07:19:39.367943 kubelet[2513]: I0813 07:19:39.367741    2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481e656da08d962264a112f273402084-k8s-certs\")
pod \"kube-apiserver-localhost\" (UID: \"481e656da08d962264a112f273402084\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:39.367943 kubelet[2513]: I0813 07:19:39.367764 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:19:39.392546 kubelet[2513]: I0813 07:19:39.390891 2513 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 07:19:39.392546 kubelet[2513]: I0813 07:19:39.390968 2513 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:19:39.458073 sudo[2552]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:19:39.458446 sudo[2552]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:19:39.688572 kubelet[2513]: E0813 07:19:39.688420 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:39.689203 kubelet[2513]: E0813 07:19:39.689181 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:39.691517 kubelet[2513]: E0813 07:19:39.691490 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:39.918758 sudo[2552]: pam_unix(sudo:session): session closed for user root Aug 13 07:19:40.184938 kubelet[2513]: I0813 07:19:40.184806 2513 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Aug 13 07:19:40.184938 kubelet[2513]: I0813 07:19:40.184914 2513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:40.454284 kubelet[2513]: E0813 07:19:40.453758 2513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:19:40.454284 kubelet[2513]: E0813 07:19:40.453966 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:40.455673 kubelet[2513]: I0813 07:19:40.454670 2513 apiserver.go:52] "Watching apiserver" Aug 13 07:19:40.455673 kubelet[2513]: E0813 07:19:40.454831 2513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:40.455673 kubelet[2513]: E0813 07:19:40.455539 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:40.456664 kubelet[2513]: E0813 07:19:40.456548 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:40.465965 kubelet[2513]: I0813 07:19:40.465921 2513 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:19:40.476869 kubelet[2513]: I0813 07:19:40.476800 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.476739062 podStartE2EDuration="1.476739062s" podCreationTimestamp="2025-08-13 07:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-08-13 07:19:40.47627827 +0000 UTC m=+1.377698737" watchObservedRunningTime="2025-08-13 07:19:40.476739062 +0000 UTC m=+1.378159529" Aug 13 07:19:40.488924 kubelet[2513]: I0813 07:19:40.488857 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.488838008 podStartE2EDuration="1.488838008s" podCreationTimestamp="2025-08-13 07:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:40.482417879 +0000 UTC m=+1.383838346" watchObservedRunningTime="2025-08-13 07:19:40.488838008 +0000 UTC m=+1.390258476" Aug 13 07:19:40.495254 kubelet[2513]: I0813 07:19:40.495218 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.495206339 podStartE2EDuration="1.495206339s" podCreationTimestamp="2025-08-13 07:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:40.488959906 +0000 UTC m=+1.390380373" watchObservedRunningTime="2025-08-13 07:19:40.495206339 +0000 UTC m=+1.396626806" Aug 13 07:19:41.186117 kubelet[2513]: I0813 07:19:41.186075 2513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:41.186117 kubelet[2513]: E0813 07:19:41.186093 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:41.192990 kubelet[2513]: E0813 07:19:41.192961 2513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:19:41.193079 kubelet[2513]: E0813 07:19:41.193064 2513 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:41.216559 sudo[1650]: pam_unix(sudo:session): session closed for user root Aug 13 07:19:41.218361 sshd[1647]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:41.222524 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:34698.service: Deactivated successfully. Aug 13 07:19:41.224541 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:19:41.224735 systemd[1]: session-7.scope: Consumed 5.599s CPU time, 155.3M memory peak, 0B memory swap peak. Aug 13 07:19:41.225183 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:19:41.226116 systemd-logind[1453]: Removed session 7. Aug 13 07:19:41.594261 kubelet[2513]: E0813 07:19:41.594146 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:42.187839 kubelet[2513]: E0813 07:19:42.187815 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:42.188258 kubelet[2513]: E0813 07:19:42.187865 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:44.130976 kubelet[2513]: I0813 07:19:44.130940 2513 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:19:44.133194 containerd[1476]: time="2025-08-13T07:19:44.133131605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 07:19:44.133472 kubelet[2513]: I0813 07:19:44.133359 2513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:19:45.027611 systemd[1]: Created slice kubepods-besteffort-podd27ffe92_b2fa_4cd6_b3b5_1d4c67142fab.slice - libcontainer container kubepods-besteffort-podd27ffe92_b2fa_4cd6_b3b5_1d4c67142fab.slice. Aug 13 07:19:45.045192 systemd[1]: Created slice kubepods-burstable-pod3fa38a5c_6cbe_4899_8188_7ef31f226de4.slice - libcontainer container kubepods-burstable-pod3fa38a5c_6cbe_4899_8188_7ef31f226de4.slice. Aug 13 07:19:45.104415 kubelet[2513]: I0813 07:19:45.104372 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-xtables-lock\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104415 kubelet[2513]: I0813 07:19:45.104408 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab-xtables-lock\") pod \"kube-proxy-qhcn4\" (UID: \"d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab\") " pod="kube-system/kube-proxy-qhcn4" Aug 13 07:19:45.104415 kubelet[2513]: I0813 07:19:45.104424 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jsn\" (UniqueName: \"kubernetes.io/projected/d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab-kube-api-access-k2jsn\") pod \"kube-proxy-qhcn4\" (UID: \"d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab\") " pod="kube-system/kube-proxy-qhcn4" Aug 13 07:19:45.104415 kubelet[2513]: I0813 07:19:45.104440 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-cgroup\") pod 
\"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104715 kubelet[2513]: I0813 07:19:45.104455 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-kernel\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104715 kubelet[2513]: I0813 07:19:45.104472 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j894b\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-kube-api-access-j894b\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104715 kubelet[2513]: I0813 07:19:45.104602 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fa38a5c-6cbe-4899-8188-7ef31f226de4-clustermesh-secrets\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104715 kubelet[2513]: I0813 07:19:45.104660 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-config-path\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104715 kubelet[2513]: I0813 07:19:45.104698 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-bpf-maps\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " 
pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104877 kubelet[2513]: I0813 07:19:45.104749 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cni-path\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104877 kubelet[2513]: I0813 07:19:45.104798 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-etc-cni-netd\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104877 kubelet[2513]: I0813 07:19:45.104824 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-run\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.104877 kubelet[2513]: I0813 07:19:45.104855 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab-kube-proxy\") pod \"kube-proxy-qhcn4\" (UID: \"d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab\") " pod="kube-system/kube-proxy-qhcn4" Aug 13 07:19:45.104877 kubelet[2513]: I0813 07:19:45.104874 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hostproc\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.105372 kubelet[2513]: I0813 07:19:45.104887 2513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-lib-modules\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.105372 kubelet[2513]: I0813 07:19:45.104932 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-net\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.105372 kubelet[2513]: I0813 07:19:45.104961 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab-lib-modules\") pod \"kube-proxy-qhcn4\" (UID: \"d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab\") " pod="kube-system/kube-proxy-qhcn4" Aug 13 07:19:45.105372 kubelet[2513]: I0813 07:19:45.104995 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hubble-tls\") pod \"cilium-ljhgc\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " pod="kube-system/cilium-ljhgc" Aug 13 07:19:45.338333 kubelet[2513]: E0813 07:19:45.338194 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.339409 containerd[1476]: time="2025-08-13T07:19:45.339354959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhcn4,Uid:d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:45.346796 systemd[1]: Created slice kubepods-besteffort-pod8a6f13f7_86d1_4e95_9e7e_7d6e074d270a.slice - 
libcontainer container kubepods-besteffort-pod8a6f13f7_86d1_4e95_9e7e_7d6e074d270a.slice. Aug 13 07:19:45.348799 kubelet[2513]: E0813 07:19:45.348763 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.349501 containerd[1476]: time="2025-08-13T07:19:45.349330224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljhgc,Uid:3fa38a5c-6cbe-4899-8188-7ef31f226de4,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:45.370361 containerd[1476]: time="2025-08-13T07:19:45.370267272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:45.370454 containerd[1476]: time="2025-08-13T07:19:45.370364545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:45.370454 containerd[1476]: time="2025-08-13T07:19:45.370405214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.370515 containerd[1476]: time="2025-08-13T07:19:45.370490069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.380498 containerd[1476]: time="2025-08-13T07:19:45.380362740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:45.380498 containerd[1476]: time="2025-08-13T07:19:45.380433526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:45.380498 containerd[1476]: time="2025-08-13T07:19:45.380445902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.380742 containerd[1476]: time="2025-08-13T07:19:45.380559992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.393088 systemd[1]: Started cri-containerd-eb1faa8dbc419319d36828fecf27549e6538bd889c4ffe02da9facc28f82cdef.scope - libcontainer container eb1faa8dbc419319d36828fecf27549e6538bd889c4ffe02da9facc28f82cdef. Aug 13 07:19:45.396565 systemd[1]: Started cri-containerd-ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278.scope - libcontainer container ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278. Aug 13 07:19:45.406626 kubelet[2513]: I0813 07:19:45.406496 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc56m\" (UniqueName: \"kubernetes.io/projected/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-kube-api-access-kc56m\") pod \"cilium-operator-6c4d7847fc-dx8w7\" (UID: \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\") " pod="kube-system/cilium-operator-6c4d7847fc-dx8w7" Aug 13 07:19:45.407280 kubelet[2513]: I0813 07:19:45.406942 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dx8w7\" (UID: \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\") " pod="kube-system/cilium-operator-6c4d7847fc-dx8w7" Aug 13 07:19:45.420069 containerd[1476]: time="2025-08-13T07:19:45.419958865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhcn4,Uid:d27ffe92-b2fa-4cd6-b3b5-1d4c67142fab,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb1faa8dbc419319d36828fecf27549e6538bd889c4ffe02da9facc28f82cdef\"" Aug 13 07:19:45.420763 kubelet[2513]: E0813 07:19:45.420740 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.424123 containerd[1476]: time="2025-08-13T07:19:45.423654207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljhgc,Uid:3fa38a5c-6cbe-4899-8188-7ef31f226de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\"" Aug 13 07:19:45.424123 containerd[1476]: time="2025-08-13T07:19:45.423684372Z" level=info msg="CreateContainer within sandbox \"eb1faa8dbc419319d36828fecf27549e6538bd889c4ffe02da9facc28f82cdef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:19:45.424391 kubelet[2513]: E0813 07:19:45.424365 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.425968 containerd[1476]: time="2025-08-13T07:19:45.425758468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:19:45.441510 containerd[1476]: time="2025-08-13T07:19:45.441464738Z" level=info msg="CreateContainer within sandbox \"eb1faa8dbc419319d36828fecf27549e6538bd889c4ffe02da9facc28f82cdef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2af28b434db29ce4d0a669dd40ccfb5ccbf8d44cf3159608fb578c05fd2c722e\"" Aug 13 07:19:45.442052 containerd[1476]: time="2025-08-13T07:19:45.442004257Z" level=info msg="StartContainer for \"2af28b434db29ce4d0a669dd40ccfb5ccbf8d44cf3159608fb578c05fd2c722e\"" Aug 13 07:19:45.468025 systemd[1]: Started cri-containerd-2af28b434db29ce4d0a669dd40ccfb5ccbf8d44cf3159608fb578c05fd2c722e.scope - libcontainer container 2af28b434db29ce4d0a669dd40ccfb5ccbf8d44cf3159608fb578c05fd2c722e. 
Aug 13 07:19:45.496434 containerd[1476]: time="2025-08-13T07:19:45.496377628Z" level=info msg="StartContainer for \"2af28b434db29ce4d0a669dd40ccfb5ccbf8d44cf3159608fb578c05fd2c722e\" returns successfully" Aug 13 07:19:45.650137 kubelet[2513]: E0813 07:19:45.650008 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.651273 containerd[1476]: time="2025-08-13T07:19:45.651013469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dx8w7,Uid:8a6f13f7-86d1-4e95-9e7e-7d6e074d270a,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:45.676690 containerd[1476]: time="2025-08-13T07:19:45.676576984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:45.676690 containerd[1476]: time="2025-08-13T07:19:45.676646306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:45.676690 containerd[1476]: time="2025-08-13T07:19:45.676666912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.677627 containerd[1476]: time="2025-08-13T07:19:45.677572101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:45.696231 systemd[1]: Started cri-containerd-fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166.scope - libcontainer container fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166. 
Aug 13 07:19:45.740146 containerd[1476]: time="2025-08-13T07:19:45.740109176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dx8w7,Uid:8a6f13f7-86d1-4e95-9e7e-7d6e074d270a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\"" Aug 13 07:19:45.741038 kubelet[2513]: E0813 07:19:45.740989 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:46.196101 kubelet[2513]: E0813 07:19:46.196078 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:46.204460 kubelet[2513]: I0813 07:19:46.204403 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhcn4" podStartSLOduration=1.204385883 podStartE2EDuration="1.204385883s" podCreationTimestamp="2025-08-13 07:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:46.204217096 +0000 UTC m=+7.105637563" watchObservedRunningTime="2025-08-13 07:19:46.204385883 +0000 UTC m=+7.105806350" Aug 13 07:19:46.904490 update_engine[1458]: I20250813 07:19:46.904404 1458 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:19:46.951932 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2894) Aug 13 07:19:49.123607 kubelet[2513]: E0813 07:19:49.123571 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:49.201123 kubelet[2513]: E0813 07:19:49.201030 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:50.765509 kubelet[2513]: E0813 07:19:50.765430 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:51.204194 kubelet[2513]: E0813 07:19:51.204153 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:51.600294 kubelet[2513]: E0813 07:19:51.600102 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:52.240250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898781231.mount: Deactivated successfully. 
Aug 13 07:19:57.529986 containerd[1476]: time="2025-08-13T07:19:57.529924162Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:57.530639 containerd[1476]: time="2025-08-13T07:19:57.530581882Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 07:19:57.531839 containerd[1476]: time="2025-08-13T07:19:57.531788088Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:57.533239 containerd[1476]: time="2025-08-13T07:19:57.533210608Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.107316144s" Aug 13 07:19:57.533318 containerd[1476]: time="2025-08-13T07:19:57.533238575Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 07:19:57.539503 containerd[1476]: time="2025-08-13T07:19:57.539470187Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:19:57.556317 containerd[1476]: time="2025-08-13T07:19:57.556288411Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:19:57.569680 containerd[1476]: time="2025-08-13T07:19:57.569635409Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\"" Aug 13 07:19:57.572143 containerd[1476]: time="2025-08-13T07:19:57.572116251Z" level=info msg="StartContainer for \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\"" Aug 13 07:19:57.606023 systemd[1]: Started cri-containerd-5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93.scope - libcontainer container 5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93. Aug 13 07:19:57.631831 containerd[1476]: time="2025-08-13T07:19:57.631740956Z" level=info msg="StartContainer for \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\" returns successfully" Aug 13 07:19:57.642596 systemd[1]: cri-containerd-5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93.scope: Deactivated successfully. 
Aug 13 07:19:58.336175 kubelet[2513]: E0813 07:19:58.336146 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:58.339534 containerd[1476]: time="2025-08-13T07:19:58.337390809Z" level=info msg="shim disconnected" id=5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93 namespace=k8s.io Aug 13 07:19:58.339643 containerd[1476]: time="2025-08-13T07:19:58.339532971Z" level=warning msg="cleaning up after shim disconnected" id=5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93 namespace=k8s.io Aug 13 07:19:58.339643 containerd[1476]: time="2025-08-13T07:19:58.339546479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:19:58.565880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93-rootfs.mount: Deactivated successfully. Aug 13 07:19:59.252039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269978480.mount: Deactivated successfully. Aug 13 07:19:59.339702 kubelet[2513]: E0813 07:19:59.339675 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:59.341759 containerd[1476]: time="2025-08-13T07:19:59.341702892Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:19:59.401835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289950630.mount: Deactivated successfully. 
Aug 13 07:19:59.406669 containerd[1476]: time="2025-08-13T07:19:59.406631021Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\"" Aug 13 07:19:59.407544 containerd[1476]: time="2025-08-13T07:19:59.407395309Z" level=info msg="StartContainer for \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\"" Aug 13 07:19:59.439025 systemd[1]: Started cri-containerd-ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2.scope - libcontainer container ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2. Aug 13 07:19:59.469214 containerd[1476]: time="2025-08-13T07:19:59.469090485Z" level=info msg="StartContainer for \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\" returns successfully" Aug 13 07:19:59.482958 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:19:59.483192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:19:59.483265 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:19:59.490544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:19:59.490774 systemd[1]: cri-containerd-ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2.scope: Deactivated successfully. Aug 13 07:19:59.513601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:19:59.674544 containerd[1476]: time="2025-08-13T07:19:59.674480966Z" level=info msg="shim disconnected" id=ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2 namespace=k8s.io Aug 13 07:19:59.674544 containerd[1476]: time="2025-08-13T07:19:59.674538363Z" level=warning msg="cleaning up after shim disconnected" id=ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2 namespace=k8s.io Aug 13 07:19:59.674544 containerd[1476]: time="2025-08-13T07:19:59.674547001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:19:59.698884 containerd[1476]: time="2025-08-13T07:19:59.698833617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:59.699716 containerd[1476]: time="2025-08-13T07:19:59.699675915Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 07:19:59.700995 containerd[1476]: time="2025-08-13T07:19:59.700975925Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:59.702275 containerd[1476]: time="2025-08-13T07:19:59.702253340Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.162755876s" Aug 13 07:19:59.702316 containerd[1476]: time="2025-08-13T07:19:59.702279142Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 07:19:59.704087 containerd[1476]: time="2025-08-13T07:19:59.704066336Z" level=info msg="CreateContainer within sandbox \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:19:59.716247 containerd[1476]: time="2025-08-13T07:19:59.716222881Z" level=info msg="CreateContainer within sandbox \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\"" Aug 13 07:19:59.716601 containerd[1476]: time="2025-08-13T07:19:59.716557804Z" level=info msg="StartContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\"" Aug 13 07:19:59.747042 systemd[1]: Started cri-containerd-21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32.scope - libcontainer container 21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32. 
Aug 13 07:19:59.773354 containerd[1476]: time="2025-08-13T07:19:59.773247846Z" level=info msg="StartContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" returns successfully" Aug 13 07:20:00.343204 kubelet[2513]: E0813 07:20:00.343158 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:00.347654 containerd[1476]: time="2025-08-13T07:20:00.347602572Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:20:00.347923 kubelet[2513]: E0813 07:20:00.347663 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:00.393560 containerd[1476]: time="2025-08-13T07:20:00.393501124Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\"" Aug 13 07:20:00.394250 containerd[1476]: time="2025-08-13T07:20:00.394215516Z" level=info msg="StartContainer for \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\"" Aug 13 07:20:00.477050 systemd[1]: Started cri-containerd-4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df.scope - libcontainer container 4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df. 
Aug 13 07:20:00.535521 containerd[1476]: time="2025-08-13T07:20:00.535467195Z" level=info msg="StartContainer for \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\" returns successfully" Aug 13 07:20:00.536794 systemd[1]: cri-containerd-4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df.scope: Deactivated successfully. Aug 13 07:20:00.561306 containerd[1476]: time="2025-08-13T07:20:00.561240200Z" level=info msg="shim disconnected" id=4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df namespace=k8s.io Aug 13 07:20:00.561306 containerd[1476]: time="2025-08-13T07:20:00.561297807Z" level=warning msg="cleaning up after shim disconnected" id=4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df namespace=k8s.io Aug 13 07:20:00.561306 containerd[1476]: time="2025-08-13T07:20:00.561306656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:01.394980 kubelet[2513]: E0813 07:20:01.394937 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:01.395439 kubelet[2513]: E0813 07:20:01.394937 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:01.396653 containerd[1476]: time="2025-08-13T07:20:01.396591981Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:20:01.411591 kubelet[2513]: I0813 07:20:01.411530 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dx8w7" podStartSLOduration=2.45036498 podStartE2EDuration="16.41149788s" podCreationTimestamp="2025-08-13 07:19:45 +0000 UTC" firstStartedPulling="2025-08-13 07:19:45.741739712 
+0000 UTC m=+6.643160179" lastFinishedPulling="2025-08-13 07:19:59.702872612 +0000 UTC m=+20.604293079" observedRunningTime="2025-08-13 07:20:00.366177998 +0000 UTC m=+21.267598465" watchObservedRunningTime="2025-08-13 07:20:01.41149788 +0000 UTC m=+22.312918347" Aug 13 07:20:01.419687 containerd[1476]: time="2025-08-13T07:20:01.419639861Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\"" Aug 13 07:20:01.419960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176863920.mount: Deactivated successfully. Aug 13 07:20:01.420134 containerd[1476]: time="2025-08-13T07:20:01.420101386Z" level=info msg="StartContainer for \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\"" Aug 13 07:20:01.451032 systemd[1]: Started cri-containerd-5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d.scope - libcontainer container 5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d. Aug 13 07:20:01.473391 systemd[1]: cri-containerd-5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d.scope: Deactivated successfully. Aug 13 07:20:01.475578 containerd[1476]: time="2025-08-13T07:20:01.475544198Z" level=info msg="StartContainer for \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\" returns successfully" Aug 13 07:20:01.565982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d-rootfs.mount: Deactivated successfully. 
Aug 13 07:20:01.820393 containerd[1476]: time="2025-08-13T07:20:01.820248637Z" level=info msg="shim disconnected" id=5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d namespace=k8s.io Aug 13 07:20:01.820393 containerd[1476]: time="2025-08-13T07:20:01.820298768Z" level=warning msg="cleaning up after shim disconnected" id=5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d namespace=k8s.io Aug 13 07:20:01.820393 containerd[1476]: time="2025-08-13T07:20:01.820308628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:02.399508 kubelet[2513]: E0813 07:20:02.399473 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:02.401687 containerd[1476]: time="2025-08-13T07:20:02.401632136Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:20:02.429414 containerd[1476]: time="2025-08-13T07:20:02.429365248Z" level=info msg="CreateContainer within sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\"" Aug 13 07:20:02.429984 containerd[1476]: time="2025-08-13T07:20:02.429947234Z" level=info msg="StartContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\"" Aug 13 07:20:02.464035 systemd[1]: Started cri-containerd-8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0.scope - libcontainer container 8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0. 
Aug 13 07:20:02.494626 containerd[1476]: time="2025-08-13T07:20:02.494573289Z" level=info msg="StartContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" returns successfully" Aug 13 07:20:02.654479 kubelet[2513]: I0813 07:20:02.654279 2513 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:20:02.674315 kubelet[2513]: I0813 07:20:02.673104 2513 status_manager.go:890] "Failed to get status for pod" podUID="989080ab-5b38-4b15-8d7f-56e5a51d45a2" pod="kube-system/coredns-668d6bf9bc-fd87s" err="pods \"coredns-668d6bf9bc-fd87s\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Aug 13 07:20:02.683011 systemd[1]: Created slice kubepods-burstable-pod989080ab_5b38_4b15_8d7f_56e5a51d45a2.slice - libcontainer container kubepods-burstable-pod989080ab_5b38_4b15_8d7f_56e5a51d45a2.slice. Aug 13 07:20:02.689786 systemd[1]: Created slice kubepods-burstable-pod5db9bfb1_3709_40ec_be15_017ebb014bdd.slice - libcontainer container kubepods-burstable-pod5db9bfb1_3709_40ec_be15_017ebb014bdd.slice. 
Aug 13 07:20:02.742148 kubelet[2513]: I0813 07:20:02.742118 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5db9bfb1-3709-40ec-be15-017ebb014bdd-config-volume\") pod \"coredns-668d6bf9bc-8qrfv\" (UID: \"5db9bfb1-3709-40ec-be15-017ebb014bdd\") " pod="kube-system/coredns-668d6bf9bc-8qrfv" Aug 13 07:20:02.742333 kubelet[2513]: I0813 07:20:02.742272 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/989080ab-5b38-4b15-8d7f-56e5a51d45a2-config-volume\") pod \"coredns-668d6bf9bc-fd87s\" (UID: \"989080ab-5b38-4b15-8d7f-56e5a51d45a2\") " pod="kube-system/coredns-668d6bf9bc-fd87s" Aug 13 07:20:02.742333 kubelet[2513]: I0813 07:20:02.742301 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6v9\" (UniqueName: \"kubernetes.io/projected/989080ab-5b38-4b15-8d7f-56e5a51d45a2-kube-api-access-4f6v9\") pod \"coredns-668d6bf9bc-fd87s\" (UID: \"989080ab-5b38-4b15-8d7f-56e5a51d45a2\") " pod="kube-system/coredns-668d6bf9bc-fd87s" Aug 13 07:20:02.742333 kubelet[2513]: I0813 07:20:02.742326 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47rk\" (UniqueName: \"kubernetes.io/projected/5db9bfb1-3709-40ec-be15-017ebb014bdd-kube-api-access-s47rk\") pod \"coredns-668d6bf9bc-8qrfv\" (UID: \"5db9bfb1-3709-40ec-be15-017ebb014bdd\") " pod="kube-system/coredns-668d6bf9bc-8qrfv" Aug 13 07:20:03.287606 kubelet[2513]: E0813 07:20:03.287570 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:03.288300 containerd[1476]: time="2025-08-13T07:20:03.288255521Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-fd87s,Uid:989080ab-5b38-4b15-8d7f-56e5a51d45a2,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:03.293483 kubelet[2513]: E0813 07:20:03.293452 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:03.293996 containerd[1476]: time="2025-08-13T07:20:03.293882253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qrfv,Uid:5db9bfb1-3709-40ec-be15-017ebb014bdd,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:03.404186 kubelet[2513]: E0813 07:20:03.404158 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:03.805834 systemd[1]: Started sshd@7-10.0.0.153:22-10.0.0.1:38600.service - OpenSSH per-connection server daemon (10.0.0.1:38600). Aug 13 07:20:03.849509 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 38600 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:03.851208 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:03.855064 systemd-logind[1453]: New session 8 of user core. Aug 13 07:20:03.863032 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:20:04.016042 sshd[3367]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:04.022934 systemd[1]: sshd@7-10.0.0.153:22-10.0.0.1:38600.service: Deactivated successfully. Aug 13 07:20:04.026191 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:20:04.029483 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:20:04.030392 systemd-logind[1453]: Removed session 8. 
Aug 13 07:20:04.405808 kubelet[2513]: E0813 07:20:04.405766 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:04.989645 systemd-networkd[1398]: cilium_host: Link UP Aug 13 07:20:04.989808 systemd-networkd[1398]: cilium_net: Link UP Aug 13 07:20:04.990579 systemd-networkd[1398]: cilium_net: Gained carrier Aug 13 07:20:04.990799 systemd-networkd[1398]: cilium_host: Gained carrier Aug 13 07:20:04.990975 systemd-networkd[1398]: cilium_net: Gained IPv6LL Aug 13 07:20:04.991164 systemd-networkd[1398]: cilium_host: Gained IPv6LL Aug 13 07:20:05.095973 systemd-networkd[1398]: cilium_vxlan: Link UP Aug 13 07:20:05.095994 systemd-networkd[1398]: cilium_vxlan: Gained carrier Aug 13 07:20:05.315933 kernel: NET: Registered PF_ALG protocol family Aug 13 07:20:05.407825 kubelet[2513]: E0813 07:20:05.407800 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:05.955515 systemd-networkd[1398]: lxc_health: Link UP Aug 13 07:20:05.962511 systemd-networkd[1398]: lxc_health: Gained carrier Aug 13 07:20:06.306988 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Aug 13 07:20:06.377643 systemd-networkd[1398]: lxcec9e2c4cfd40: Link UP Aug 13 07:20:06.386726 systemd-networkd[1398]: lxc3ac921a1167a: Link UP Aug 13 07:20:06.398926 kernel: eth0: renamed from tmp7576d Aug 13 07:20:06.407927 kernel: eth0: renamed from tmp5b884 Aug 13 07:20:06.409767 kubelet[2513]: E0813 07:20:06.409742 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:06.413576 systemd-networkd[1398]: lxc3ac921a1167a: Gained carrier Aug 13 07:20:06.413782 systemd-networkd[1398]: lxcec9e2c4cfd40: Gained carrier Aug 13 
07:20:07.007050 systemd-networkd[1398]: lxc_health: Gained IPv6LL Aug 13 07:20:07.363746 kubelet[2513]: I0813 07:20:07.363495 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ljhgc" podStartSLOduration=10.248982336 podStartE2EDuration="22.363476889s" podCreationTimestamp="2025-08-13 07:19:45 +0000 UTC" firstStartedPulling="2025-08-13 07:19:45.424815877 +0000 UTC m=+6.326236354" lastFinishedPulling="2025-08-13 07:19:57.53931044 +0000 UTC m=+18.440730907" observedRunningTime="2025-08-13 07:20:03.418173843 +0000 UTC m=+24.319594340" watchObservedRunningTime="2025-08-13 07:20:07.363476889 +0000 UTC m=+28.264897356" Aug 13 07:20:07.411535 kubelet[2513]: E0813 07:20:07.411427 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:07.519153 systemd-networkd[1398]: lxcec9e2c4cfd40: Gained IPv6LL Aug 13 07:20:08.351099 systemd-networkd[1398]: lxc3ac921a1167a: Gained IPv6LL Aug 13 07:20:08.413304 kubelet[2513]: E0813 07:20:08.413268 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:09.034806 systemd[1]: Started sshd@8-10.0.0.153:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592). Aug 13 07:20:09.075822 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:09.077341 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:09.082356 systemd-logind[1453]: New session 9 of user core. Aug 13 07:20:09.090050 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 07:20:09.206755 sshd[3757]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:09.210526 systemd[1]: sshd@8-10.0.0.153:22-10.0.0.1:50592.service: Deactivated successfully. Aug 13 07:20:09.212328 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:20:09.213070 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:20:09.213945 systemd-logind[1453]: Removed session 9. Aug 13 07:20:10.111164 containerd[1476]: time="2025-08-13T07:20:10.111024687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:10.112011 containerd[1476]: time="2025-08-13T07:20:10.111118684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:10.112011 containerd[1476]: time="2025-08-13T07:20:10.111985086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:10.112294 containerd[1476]: time="2025-08-13T07:20:10.112086217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:10.133085 systemd[1]: Started cri-containerd-7576de95a75a138059c1e8f92b3e92faa0dce874b76c9309296bc33750dce58a.scope - libcontainer container 7576de95a75a138059c1e8f92b3e92faa0dce874b76c9309296bc33750dce58a. 
Aug 13 07:20:10.145829 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:20:10.169481 containerd[1476]: time="2025-08-13T07:20:10.169436595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qrfv,Uid:5db9bfb1-3709-40ec-be15-017ebb014bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7576de95a75a138059c1e8f92b3e92faa0dce874b76c9309296bc33750dce58a\"" Aug 13 07:20:10.170201 kubelet[2513]: E0813 07:20:10.170173 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:10.172066 containerd[1476]: time="2025-08-13T07:20:10.172026322Z" level=info msg="CreateContainer within sandbox \"7576de95a75a138059c1e8f92b3e92faa0dce874b76c9309296bc33750dce58a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:20:10.204338 containerd[1476]: time="2025-08-13T07:20:10.204237654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:10.204338 containerd[1476]: time="2025-08-13T07:20:10.204292022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:10.204338 containerd[1476]: time="2025-08-13T07:20:10.204303625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:10.204529 containerd[1476]: time="2025-08-13T07:20:10.204381149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:10.230029 systemd[1]: Started cri-containerd-5b88437327b9fbce716d8792863b1f50e041ca3b8951c1ec21676798aec0d61c.scope - libcontainer container 5b88437327b9fbce716d8792863b1f50e041ca3b8951c1ec21676798aec0d61c. Aug 13 07:20:10.241271 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:20:10.264293 containerd[1476]: time="2025-08-13T07:20:10.264251576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fd87s,Uid:989080ab-5b38-4b15-8d7f-56e5a51d45a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b88437327b9fbce716d8792863b1f50e041ca3b8951c1ec21676798aec0d61c\"" Aug 13 07:20:10.265001 kubelet[2513]: E0813 07:20:10.264933 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:10.266829 containerd[1476]: time="2025-08-13T07:20:10.266795743Z" level=info msg="CreateContainer within sandbox \"5b88437327b9fbce716d8792863b1f50e041ca3b8951c1ec21676798aec0d61c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:20:10.671136 containerd[1476]: time="2025-08-13T07:20:10.671034445Z" level=info msg="CreateContainer within sandbox \"5b88437327b9fbce716d8792863b1f50e041ca3b8951c1ec21676798aec0d61c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e17db4ec4961c11cf532f26a81632b2f4079bc5c60bfd95c77bfdc65415a1c7\"" Aug 13 07:20:10.671574 containerd[1476]: time="2025-08-13T07:20:10.671537675Z" level=info msg="StartContainer for \"5e17db4ec4961c11cf532f26a81632b2f4079bc5c60bfd95c77bfdc65415a1c7\"" Aug 13 07:20:10.673137 containerd[1476]: time="2025-08-13T07:20:10.673072596Z" level=info msg="CreateContainer within sandbox \"7576de95a75a138059c1e8f92b3e92faa0dce874b76c9309296bc33750dce58a\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d82dbdb779c78cdf894f739f6a26a8d5a521b40c02f7fa87d8d3ab594360ed3e\"" Aug 13 07:20:10.673578 containerd[1476]: time="2025-08-13T07:20:10.673507210Z" level=info msg="StartContainer for \"d82dbdb779c78cdf894f739f6a26a8d5a521b40c02f7fa87d8d3ab594360ed3e\"" Aug 13 07:20:10.704035 systemd[1]: Started cri-containerd-d82dbdb779c78cdf894f739f6a26a8d5a521b40c02f7fa87d8d3ab594360ed3e.scope - libcontainer container d82dbdb779c78cdf894f739f6a26a8d5a521b40c02f7fa87d8d3ab594360ed3e. Aug 13 07:20:10.707268 systemd[1]: Started cri-containerd-5e17db4ec4961c11cf532f26a81632b2f4079bc5c60bfd95c77bfdc65415a1c7.scope - libcontainer container 5e17db4ec4961c11cf532f26a81632b2f4079bc5c60bfd95c77bfdc65415a1c7. Aug 13 07:20:10.734429 containerd[1476]: time="2025-08-13T07:20:10.734380570Z" level=info msg="StartContainer for \"d82dbdb779c78cdf894f739f6a26a8d5a521b40c02f7fa87d8d3ab594360ed3e\" returns successfully" Aug 13 07:20:10.738316 containerd[1476]: time="2025-08-13T07:20:10.738280130Z" level=info msg="StartContainer for \"5e17db4ec4961c11cf532f26a81632b2f4079bc5c60bfd95c77bfdc65415a1c7\" returns successfully" Aug 13 07:20:11.421735 kubelet[2513]: E0813 07:20:11.421287 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:11.423994 kubelet[2513]: E0813 07:20:11.423939 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:11.431273 kubelet[2513]: I0813 07:20:11.431204 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8qrfv" podStartSLOduration=26.431183862 podStartE2EDuration="26.431183862s" podCreationTimestamp="2025-08-13 07:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:11.430754349 +0000 UTC m=+32.332174806" watchObservedRunningTime="2025-08-13 07:20:11.431183862 +0000 UTC m=+32.332604329" Aug 13 07:20:11.439607 kubelet[2513]: I0813 07:20:11.439543 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fd87s" podStartSLOduration=26.439522579 podStartE2EDuration="26.439522579s" podCreationTimestamp="2025-08-13 07:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:11.438803392 +0000 UTC m=+32.340223859" watchObservedRunningTime="2025-08-13 07:20:11.439522579 +0000 UTC m=+32.340943046" Aug 13 07:20:12.425662 kubelet[2513]: E0813 07:20:12.425624 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:12.426146 kubelet[2513]: E0813 07:20:12.425757 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:13.427571 kubelet[2513]: E0813 07:20:13.427536 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:13.428032 kubelet[2513]: E0813 07:20:13.427708 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:14.222001 systemd[1]: Started sshd@9-10.0.0.153:22-10.0.0.1:50598.service - OpenSSH per-connection server daemon (10.0.0.1:50598). 
Aug 13 07:20:14.263484 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 50598 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:14.265236 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:14.269199 systemd-logind[1453]: New session 10 of user core. Aug 13 07:20:14.279041 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:20:14.426752 sshd[3943]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:14.431219 systemd[1]: sshd@9-10.0.0.153:22-10.0.0.1:50598.service: Deactivated successfully. Aug 13 07:20:14.433246 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:20:14.433832 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:20:14.434652 systemd-logind[1453]: Removed session 10. Aug 13 07:20:19.437939 systemd[1]: Started sshd@10-10.0.0.153:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236). Aug 13 07:20:19.476039 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:19.477537 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:19.481108 systemd-logind[1453]: New session 11 of user core. Aug 13 07:20:19.496039 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:20:19.613588 sshd[3964]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:19.630496 systemd[1]: sshd@10-10.0.0.153:22-10.0.0.1:56236.service: Deactivated successfully. Aug 13 07:20:19.632063 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:20:19.633383 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:20:19.638278 systemd[1]: Started sshd@11-10.0.0.153:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250). 
Aug 13 07:20:19.639149 systemd-logind[1453]: Removed session 11. Aug 13 07:20:19.671940 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:19.673378 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:19.676940 systemd-logind[1453]: New session 12 of user core. Aug 13 07:20:19.689159 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:20:19.835287 sshd[3980]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:19.845653 systemd[1]: sshd@11-10.0.0.153:22-10.0.0.1:56250.service: Deactivated successfully. Aug 13 07:20:19.847244 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:20:19.849442 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:20:19.854626 systemd[1]: Started sshd@12-10.0.0.153:22-10.0.0.1:56264.service - OpenSSH per-connection server daemon (10.0.0.1:56264). Aug 13 07:20:19.856113 systemd-logind[1453]: Removed session 12. Aug 13 07:20:19.889757 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 56264 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:19.891233 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:19.895213 systemd-logind[1453]: New session 13 of user core. Aug 13 07:20:19.902037 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:20:20.006412 sshd[3993]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:20.010450 systemd[1]: sshd@12-10.0.0.153:22-10.0.0.1:56264.service: Deactivated successfully. Aug 13 07:20:20.012248 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:20:20.013014 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:20:20.013782 systemd-logind[1453]: Removed session 13. 
Aug 13 07:20:25.019563 systemd[1]: Started sshd@13-10.0.0.153:22-10.0.0.1:56272.service - OpenSSH per-connection server daemon (10.0.0.1:56272). Aug 13 07:20:25.074864 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 56272 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:25.076412 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:25.080467 systemd-logind[1453]: New session 14 of user core. Aug 13 07:20:25.088043 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:20:25.197325 sshd[4008]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:25.201500 systemd[1]: sshd@13-10.0.0.153:22-10.0.0.1:56272.service: Deactivated successfully. Aug 13 07:20:25.203353 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:20:25.204090 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:20:25.204944 systemd-logind[1453]: Removed session 14. Aug 13 07:20:30.208752 systemd[1]: Started sshd@14-10.0.0.153:22-10.0.0.1:53988.service - OpenSSH per-connection server daemon (10.0.0.1:53988). Aug 13 07:20:30.246362 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 53988 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:30.248129 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:30.251682 systemd-logind[1453]: New session 15 of user core. Aug 13 07:20:30.260028 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:20:30.365484 sshd[4023]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:30.372776 systemd[1]: sshd@14-10.0.0.153:22-10.0.0.1:53988.service: Deactivated successfully. Aug 13 07:20:30.374563 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:20:30.376210 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. 
Aug 13 07:20:30.384133 systemd[1]: Started sshd@15-10.0.0.153:22-10.0.0.1:54002.service - OpenSSH per-connection server daemon (10.0.0.1:54002). Aug 13 07:20:30.385122 systemd-logind[1453]: Removed session 15. Aug 13 07:20:30.418091 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 54002 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:30.419851 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:30.423816 systemd-logind[1453]: New session 16 of user core. Aug 13 07:20:30.441053 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:20:30.619450 sshd[4037]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:30.630733 systemd[1]: sshd@15-10.0.0.153:22-10.0.0.1:54002.service: Deactivated successfully. Aug 13 07:20:30.632514 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:20:30.634135 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:20:30.635411 systemd[1]: Started sshd@16-10.0.0.153:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). Aug 13 07:20:30.636591 systemd-logind[1453]: Removed session 16. Aug 13 07:20:30.676989 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:30.678472 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:30.682122 systemd-logind[1453]: New session 17 of user core. Aug 13 07:20:30.693033 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:20:31.183848 sshd[4049]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:31.196115 systemd[1]: sshd@16-10.0.0.153:22-10.0.0.1:54012.service: Deactivated successfully. Aug 13 07:20:31.199328 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:20:31.202344 systemd-logind[1453]: Session 17 logged out. 
Waiting for processes to exit. Aug 13 07:20:31.210157 systemd[1]: Started sshd@17-10.0.0.153:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). Aug 13 07:20:31.210987 systemd-logind[1453]: Removed session 17. Aug 13 07:20:31.244069 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:31.245608 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:31.249529 systemd-logind[1453]: New session 18 of user core. Aug 13 07:20:31.259012 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:20:31.504749 sshd[4070]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:31.515941 systemd[1]: sshd@17-10.0.0.153:22-10.0.0.1:54028.service: Deactivated successfully. Aug 13 07:20:31.517961 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:20:31.519767 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:20:31.526126 systemd[1]: Started sshd@18-10.0.0.153:22-10.0.0.1:54030.service - OpenSSH per-connection server daemon (10.0.0.1:54030). Aug 13 07:20:31.526887 systemd-logind[1453]: Removed session 18. Aug 13 07:20:31.560164 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 54030 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:31.562340 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:31.566265 systemd-logind[1453]: New session 19 of user core. Aug 13 07:20:31.576025 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:20:31.680513 sshd[4083]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:31.684631 systemd[1]: sshd@18-10.0.0.153:22-10.0.0.1:54030.service: Deactivated successfully. Aug 13 07:20:31.686601 systemd[1]: session-19.scope: Deactivated successfully. 
Aug 13 07:20:31.687284 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:20:31.688222 systemd-logind[1453]: Removed session 19. Aug 13 07:20:36.692763 systemd[1]: Started sshd@19-10.0.0.153:22-10.0.0.1:54046.service - OpenSSH per-connection server daemon (10.0.0.1:54046). Aug 13 07:20:36.730507 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 54046 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:36.732200 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:36.736051 systemd-logind[1453]: New session 20 of user core. Aug 13 07:20:36.745039 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:20:36.848786 sshd[4100]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:36.852942 systemd[1]: sshd@19-10.0.0.153:22-10.0.0.1:54046.service: Deactivated successfully. Aug 13 07:20:36.854938 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:20:36.855559 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:20:36.856440 systemd-logind[1453]: Removed session 20. Aug 13 07:20:41.859681 systemd[1]: Started sshd@20-10.0.0.153:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990). Aug 13 07:20:41.897642 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:41.899226 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:41.902942 systemd-logind[1453]: New session 21 of user core. Aug 13 07:20:41.917031 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:20:42.021822 sshd[4117]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:42.025881 systemd[1]: sshd@20-10.0.0.153:22-10.0.0.1:53990.service: Deactivated successfully. 
Aug 13 07:20:42.027623 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:20:42.028310 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:20:42.029219 systemd-logind[1453]: Removed session 21. Aug 13 07:20:47.036857 systemd[1]: Started sshd@21-10.0.0.153:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992). Aug 13 07:20:47.074884 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:47.076365 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:47.080118 systemd-logind[1453]: New session 22 of user core. Aug 13 07:20:47.091028 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:20:47.174003 kubelet[2513]: E0813 07:20:47.173864 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:47.195048 sshd[4134]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:47.199348 systemd[1]: sshd@21-10.0.0.153:22-10.0.0.1:53992.service: Deactivated successfully. Aug 13 07:20:47.201430 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:20:47.202046 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:20:47.202867 systemd-logind[1453]: Removed session 22. Aug 13 07:20:51.174165 kubelet[2513]: E0813 07:20:51.174120 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:52.209849 systemd[1]: Started sshd@22-10.0.0.153:22-10.0.0.1:38290.service - OpenSSH per-connection server daemon (10.0.0.1:38290). 
Aug 13 07:20:52.247780 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 38290 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:52.249266 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:52.252839 systemd-logind[1453]: New session 23 of user core. Aug 13 07:20:52.256019 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:20:52.366823 sshd[4148]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:52.381813 systemd[1]: sshd@22-10.0.0.153:22-10.0.0.1:38290.service: Deactivated successfully. Aug 13 07:20:52.383662 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:20:52.385298 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:20:52.394129 systemd[1]: Started sshd@23-10.0.0.153:22-10.0.0.1:38302.service - OpenSSH per-connection server daemon (10.0.0.1:38302). Aug 13 07:20:52.395067 systemd-logind[1453]: Removed session 23. Aug 13 07:20:52.428617 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:52.430113 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:52.434140 systemd-logind[1453]: New session 24 of user core. Aug 13 07:20:52.443026 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:20:53.904202 containerd[1476]: time="2025-08-13T07:20:53.904151941Z" level=info msg="StopContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" with timeout 30 (s)" Aug 13 07:20:53.904729 containerd[1476]: time="2025-08-13T07:20:53.904550832Z" level=info msg="Stop container \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" with signal terminated" Aug 13 07:20:53.919433 systemd[1]: cri-containerd-21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32.scope: Deactivated successfully. 
Aug 13 07:20:53.934298 containerd[1476]: time="2025-08-13T07:20:53.934107584Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:20:53.936834 containerd[1476]: time="2025-08-13T07:20:53.936775072Z" level=info msg="StopContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" with timeout 2 (s)" Aug 13 07:20:53.937070 containerd[1476]: time="2025-08-13T07:20:53.937029390Z" level=info msg="Stop container \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" with signal terminated" Aug 13 07:20:53.944701 systemd-networkd[1398]: lxc_health: Link DOWN Aug 13 07:20:53.944709 systemd-networkd[1398]: lxc_health: Lost carrier Aug 13 07:20:53.944754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32-rootfs.mount: Deactivated successfully. Aug 13 07:20:53.954942 containerd[1476]: time="2025-08-13T07:20:53.954848658Z" level=info msg="shim disconnected" id=21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32 namespace=k8s.io Aug 13 07:20:53.954942 containerd[1476]: time="2025-08-13T07:20:53.954940872Z" level=warning msg="cleaning up after shim disconnected" id=21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32 namespace=k8s.io Aug 13 07:20:53.954942 containerd[1476]: time="2025-08-13T07:20:53.954950309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:53.973722 containerd[1476]: time="2025-08-13T07:20:53.973679262Z" level=info msg="StopContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" returns successfully" Aug 13 07:20:53.976502 systemd[1]: cri-containerd-8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0.scope: Deactivated successfully. 
Aug 13 07:20:53.976794 systemd[1]: cri-containerd-8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0.scope: Consumed 6.617s CPU time. Aug 13 07:20:53.979310 containerd[1476]: time="2025-08-13T07:20:53.979271833Z" level=info msg="StopPodSandbox for \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\"" Aug 13 07:20:53.979435 containerd[1476]: time="2025-08-13T07:20:53.979323341Z" level=info msg="Container to stop \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:53.982806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166-shm.mount: Deactivated successfully. Aug 13 07:20:53.986196 systemd[1]: cri-containerd-fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166.scope: Deactivated successfully. Aug 13 07:20:53.997226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0-rootfs.mount: Deactivated successfully. Aug 13 07:20:54.003718 containerd[1476]: time="2025-08-13T07:20:54.003663943Z" level=info msg="shim disconnected" id=8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0 namespace=k8s.io Aug 13 07:20:54.004171 containerd[1476]: time="2025-08-13T07:20:54.003979529Z" level=warning msg="cleaning up after shim disconnected" id=8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0 namespace=k8s.io Aug 13 07:20:54.004171 containerd[1476]: time="2025-08-13T07:20:54.003993735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:54.009233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166-rootfs.mount: Deactivated successfully. 
Aug 13 07:20:54.010780 containerd[1476]: time="2025-08-13T07:20:54.010621297Z" level=info msg="shim disconnected" id=fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166 namespace=k8s.io Aug 13 07:20:54.011040 containerd[1476]: time="2025-08-13T07:20:54.010791658Z" level=warning msg="cleaning up after shim disconnected" id=fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166 namespace=k8s.io Aug 13 07:20:54.011040 containerd[1476]: time="2025-08-13T07:20:54.010804111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:54.021639 containerd[1476]: time="2025-08-13T07:20:54.021590908Z" level=info msg="StopContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" returns successfully" Aug 13 07:20:54.022263 containerd[1476]: time="2025-08-13T07:20:54.022215346Z" level=info msg="StopPodSandbox for \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\"" Aug 13 07:20:54.022263 containerd[1476]: time="2025-08-13T07:20:54.022256885Z" level=info msg="Container to stop \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:54.022263 containerd[1476]: time="2025-08-13T07:20:54.022269178Z" level=info msg="Container to stop \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:54.022477 containerd[1476]: time="2025-08-13T07:20:54.022282803Z" level=info msg="Container to stop \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:54.022477 containerd[1476]: time="2025-08-13T07:20:54.022292632Z" level=info msg="Container to stop \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:54.022477 
containerd[1476]: time="2025-08-13T07:20:54.022302721Z" level=info msg="Container to stop \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:20:54.028428 systemd[1]: cri-containerd-ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278.scope: Deactivated successfully. Aug 13 07:20:54.033645 containerd[1476]: time="2025-08-13T07:20:54.033606193Z" level=info msg="TearDown network for sandbox \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\" successfully" Aug 13 07:20:54.033859 containerd[1476]: time="2025-08-13T07:20:54.033720378Z" level=info msg="StopPodSandbox for \"fb7adf642fa94d39c99f161bd770acbee56cc9fb54599b802466ca26d9f79166\" returns successfully" Aug 13 07:20:54.053214 containerd[1476]: time="2025-08-13T07:20:54.053087369Z" level=info msg="shim disconnected" id=ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278 namespace=k8s.io Aug 13 07:20:54.053502 containerd[1476]: time="2025-08-13T07:20:54.053465422Z" level=warning msg="cleaning up after shim disconnected" id=ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278 namespace=k8s.io Aug 13 07:20:54.053502 containerd[1476]: time="2025-08-13T07:20:54.053483025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:54.068854 containerd[1476]: time="2025-08-13T07:20:54.068792744Z" level=info msg="TearDown network for sandbox \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" successfully" Aug 13 07:20:54.069304 containerd[1476]: time="2025-08-13T07:20:54.069024571Z" level=info msg="StopPodSandbox for \"ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278\" returns successfully" Aug 13 07:20:54.148163 kubelet[2513]: I0813 07:20:54.148102 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-run\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148163 kubelet[2513]: I0813 07:20:54.148149 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-net\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148163 kubelet[2513]: I0813 07:20:54.148171 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-cgroup\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148184 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-bpf-maps\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148206 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fa38a5c-6cbe-4899-8188-7ef31f226de4-clustermesh-secrets\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148220 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hostproc\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148236 2513 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-kernel\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148254 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j894b\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-kube-api-access-j894b\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148673 kubelet[2513]: I0813 07:20:54.148250 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148832 kubelet[2513]: I0813 07:20:54.148286 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hostproc" (OuterVolumeSpecName: "hostproc") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148832 kubelet[2513]: I0813 07:20:54.148256 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148832 kubelet[2513]: I0813 07:20:54.148304 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cni-path" (OuterVolumeSpecName: "cni-path") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148832 kubelet[2513]: I0813 07:20:54.148319 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148832 kubelet[2513]: I0813 07:20:54.148267 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cni-path\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148319 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148373 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-lib-modules\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148398 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hubble-tls\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148414 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-xtables-lock\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148433 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-config-path\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.148986 kubelet[2513]: I0813 07:20:54.148450 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc56m\" (UniqueName: \"kubernetes.io/projected/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-kube-api-access-kc56m\") pod \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\" (UID: \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\") " Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148466 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-cilium-config-path\") pod \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\" (UID: \"8a6f13f7-86d1-4e95-9e7e-7d6e074d270a\") " Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148487 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-etc-cni-netd\") pod \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\" (UID: \"3fa38a5c-6cbe-4899-8188-7ef31f226de4\") " Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148527 2513 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148538 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148552 2513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148560 2513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148569 2513 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149120 kubelet[2513]: I0813 07:20:54.148577 2513 reconciler_common.go:299] "Volume detached for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.149369 kubelet[2513]: I0813 07:20:54.148599 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.149369 kubelet[2513]: I0813 07:20:54.148614 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.149369 kubelet[2513]: I0813 07:20:54.148753 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.151886 kubelet[2513]: I0813 07:20:54.151039 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:20:54.152638 kubelet[2513]: I0813 07:20:54.152606 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa38a5c-6cbe-4899-8188-7ef31f226de4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:20:54.152721 kubelet[2513]: I0813 07:20:54.152660 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:20:54.154508 kubelet[2513]: I0813 07:20:54.154368 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-kube-api-access-j894b" (OuterVolumeSpecName: "kube-api-access-j894b") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "kube-api-access-j894b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:20:54.154508 kubelet[2513]: I0813 07:20:54.154421 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3fa38a5c-6cbe-4899-8188-7ef31f226de4" (UID: "3fa38a5c-6cbe-4899-8188-7ef31f226de4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:20:54.154669 kubelet[2513]: I0813 07:20:54.154525 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-kube-api-access-kc56m" (OuterVolumeSpecName: "kube-api-access-kc56m") pod "8a6f13f7-86d1-4e95-9e7e-7d6e074d270a" (UID: "8a6f13f7-86d1-4e95-9e7e-7d6e074d270a"). InnerVolumeSpecName "kube-api-access-kc56m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:20:54.154768 kubelet[2513]: I0813 07:20:54.154747 2513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a6f13f7-86d1-4e95-9e7e-7d6e074d270a" (UID: "8a6f13f7-86d1-4e95-9e7e-7d6e074d270a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:20:54.237633 kubelet[2513]: E0813 07:20:54.237590 2513 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:20:54.248910 kubelet[2513]: I0813 07:20:54.248875 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kc56m\" (UniqueName: \"kubernetes.io/projected/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-kube-api-access-kc56m\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248914 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248926 2513 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248935 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248944 2513 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fa38a5c-6cbe-4899-8188-7ef31f226de4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248953 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j894b\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-kube-api-access-j894b\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248960 2513 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.248967 kubelet[2513]: I0813 07:20:54.248968 2513 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fa38a5c-6cbe-4899-8188-7ef31f226de4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.249142 kubelet[2513]: I0813 07:20:54.248978 2513 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fa38a5c-6cbe-4899-8188-7ef31f226de4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 07:20:54.249142 kubelet[2513]: I0813 07:20:54.248986 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fa38a5c-6cbe-4899-8188-7ef31f226de4-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Aug 13 07:20:54.500878 kubelet[2513]: I0813 07:20:54.500445 2513 scope.go:117] "RemoveContainer" containerID="8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0" Aug 13 07:20:54.501672 containerd[1476]: time="2025-08-13T07:20:54.501636032Z" level=info msg="RemoveContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\"" Aug 13 07:20:54.509998 systemd[1]: Removed slice kubepods-burstable-pod3fa38a5c_6cbe_4899_8188_7ef31f226de4.slice - libcontainer container kubepods-burstable-pod3fa38a5c_6cbe_4899_8188_7ef31f226de4.slice. Aug 13 07:20:54.510428 systemd[1]: kubepods-burstable-pod3fa38a5c_6cbe_4899_8188_7ef31f226de4.slice: Consumed 6.716s CPU time. Aug 13 07:20:54.512348 systemd[1]: Removed slice kubepods-besteffort-pod8a6f13f7_86d1_4e95_9e7e_7d6e074d270a.slice - libcontainer container kubepods-besteffort-pod8a6f13f7_86d1_4e95_9e7e_7d6e074d270a.slice. Aug 13 07:20:54.513971 containerd[1476]: time="2025-08-13T07:20:54.513938859Z" level=info msg="RemoveContainer for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" returns successfully" Aug 13 07:20:54.514203 kubelet[2513]: I0813 07:20:54.514175 2513 scope.go:117] "RemoveContainer" containerID="5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d" Aug 13 07:20:54.515303 containerd[1476]: time="2025-08-13T07:20:54.515270891Z" level=info msg="RemoveContainer for \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\"" Aug 13 07:20:54.518540 containerd[1476]: time="2025-08-13T07:20:54.518505222Z" level=info msg="RemoveContainer for \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\" returns successfully" Aug 13 07:20:54.518694 kubelet[2513]: I0813 07:20:54.518665 2513 scope.go:117] "RemoveContainer" containerID="4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df" Aug 13 07:20:54.520401 containerd[1476]: time="2025-08-13T07:20:54.520365060Z" level=info msg="RemoveContainer for 
\"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\"" Aug 13 07:20:54.523833 containerd[1476]: time="2025-08-13T07:20:54.523806381Z" level=info msg="RemoveContainer for \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\" returns successfully" Aug 13 07:20:54.524110 kubelet[2513]: I0813 07:20:54.523993 2513 scope.go:117] "RemoveContainer" containerID="ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2" Aug 13 07:20:54.525029 containerd[1476]: time="2025-08-13T07:20:54.524999782Z" level=info msg="RemoveContainer for \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\"" Aug 13 07:20:54.528045 containerd[1476]: time="2025-08-13T07:20:54.528012034Z" level=info msg="RemoveContainer for \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\" returns successfully" Aug 13 07:20:54.528215 kubelet[2513]: I0813 07:20:54.528190 2513 scope.go:117] "RemoveContainer" containerID="5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93" Aug 13 07:20:54.529555 containerd[1476]: time="2025-08-13T07:20:54.529485484Z" level=info msg="RemoveContainer for \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\"" Aug 13 07:20:54.532425 containerd[1476]: time="2025-08-13T07:20:54.532401505Z" level=info msg="RemoveContainer for \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\" returns successfully" Aug 13 07:20:54.532589 kubelet[2513]: I0813 07:20:54.532565 2513 scope.go:117] "RemoveContainer" containerID="8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0" Aug 13 07:20:54.535712 containerd[1476]: time="2025-08-13T07:20:54.535671713Z" level=error msg="ContainerStatus for \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\": not found" Aug 13 07:20:54.543573 kubelet[2513]: E0813 
07:20:54.543537 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\": not found" containerID="8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0" Aug 13 07:20:54.543654 kubelet[2513]: I0813 07:20:54.543571 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0"} err="failed to get container status \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\": rpc error: code = NotFound desc = an error occurred when try to find container \"8141a88155888a401a3d18775941a7b98f100405b17b5c3e27c222c4c7674ab0\": not found" Aug 13 07:20:54.543654 kubelet[2513]: I0813 07:20:54.543649 2513 scope.go:117] "RemoveContainer" containerID="5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d" Aug 13 07:20:54.543873 containerd[1476]: time="2025-08-13T07:20:54.543812318Z" level=error msg="ContainerStatus for \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\": not found" Aug 13 07:20:54.543960 kubelet[2513]: E0813 07:20:54.543941 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\": not found" containerID="5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d" Aug 13 07:20:54.544016 kubelet[2513]: I0813 07:20:54.543957 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d"} err="failed to get container status 
\"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ed58b969232058470c5baa75d9f43bb4f57e4dec47f83c0beca409908ca526d\": not found" Aug 13 07:20:54.544016 kubelet[2513]: I0813 07:20:54.543970 2513 scope.go:117] "RemoveContainer" containerID="4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df" Aug 13 07:20:54.544206 containerd[1476]: time="2025-08-13T07:20:54.544155134Z" level=error msg="ContainerStatus for \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\": not found" Aug 13 07:20:54.544344 kubelet[2513]: E0813 07:20:54.544319 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\": not found" containerID="4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df" Aug 13 07:20:54.544383 kubelet[2513]: I0813 07:20:54.544349 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df"} err="failed to get container status \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\": rpc error: code = NotFound desc = an error occurred when try to find container \"4216b89e259a5faa3aa3f275c396765e03de1f1ae16e61ca79e504ac498a28df\": not found" Aug 13 07:20:54.544383 kubelet[2513]: I0813 07:20:54.544373 2513 scope.go:117] "RemoveContainer" containerID="ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2" Aug 13 07:20:54.544560 containerd[1476]: time="2025-08-13T07:20:54.544532857Z" level=error msg="ContainerStatus for \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\": not found" Aug 13 07:20:54.544693 kubelet[2513]: E0813 07:20:54.544667 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\": not found" containerID="ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2" Aug 13 07:20:54.544735 kubelet[2513]: I0813 07:20:54.544707 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2"} err="failed to get container status \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee0a6f3422f03778cc4c51b01280be46260ed2204349b4a96d5278ad03f305a2\": not found" Aug 13 07:20:54.544759 kubelet[2513]: I0813 07:20:54.544737 2513 scope.go:117] "RemoveContainer" containerID="5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93" Aug 13 07:20:54.544969 containerd[1476]: time="2025-08-13T07:20:54.544934365Z" level=error msg="ContainerStatus for \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\": not found" Aug 13 07:20:54.545075 kubelet[2513]: E0813 07:20:54.545054 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\": not found" containerID="5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93" Aug 13 07:20:54.545127 kubelet[2513]: I0813 07:20:54.545077 2513 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93"} err="failed to get container status \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\": rpc error: code = NotFound desc = an error occurred when try to find container \"5239ff9d45dbe7fdd97408667235d16ce2a0f40615064ca466cfd98bd3306a93\": not found" Aug 13 07:20:54.545127 kubelet[2513]: I0813 07:20:54.545090 2513 scope.go:117] "RemoveContainer" containerID="21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32" Aug 13 07:20:54.546050 containerd[1476]: time="2025-08-13T07:20:54.546026565Z" level=info msg="RemoveContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\"" Aug 13 07:20:54.549270 containerd[1476]: time="2025-08-13T07:20:54.549241339Z" level=info msg="RemoveContainer for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" returns successfully" Aug 13 07:20:54.549390 kubelet[2513]: I0813 07:20:54.549362 2513 scope.go:117] "RemoveContainer" containerID="21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32" Aug 13 07:20:54.549571 containerd[1476]: time="2025-08-13T07:20:54.549533850Z" level=error msg="ContainerStatus for \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\": not found" Aug 13 07:20:54.549691 kubelet[2513]: E0813 07:20:54.549667 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\": not found" containerID="21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32" Aug 13 07:20:54.549735 kubelet[2513]: I0813 07:20:54.549691 2513 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32"} err="failed to get container status \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\": rpc error: code = NotFound desc = an error occurred when try to find container \"21bca8fd40b43c2c6571f69b4295d4673b1b7d6aba9658d1c3b8eaa7875a9b32\": not found" Aug 13 07:20:54.913067 systemd[1]: var-lib-kubelet-pods-8a6f13f7\x2d86d1\x2d4e95\x2d9e7e\x2d7d6e074d270a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkc56m.mount: Deactivated successfully. Aug 13 07:20:54.913200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278-rootfs.mount: Deactivated successfully. Aug 13 07:20:54.913276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad3ff4d11406e703baf602f5302f67d08d927b3bf41948762927d519f9d35278-shm.mount: Deactivated successfully. Aug 13 07:20:54.913351 systemd[1]: var-lib-kubelet-pods-3fa38a5c\x2d6cbe\x2d4899\x2d8188\x2d7ef31f226de4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:20:54.913437 systemd[1]: var-lib-kubelet-pods-3fa38a5c\x2d6cbe\x2d4899\x2d8188\x2d7ef31f226de4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 07:20:54.913513 systemd[1]: var-lib-kubelet-pods-3fa38a5c\x2d6cbe\x2d4899\x2d8188\x2d7ef31f226de4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj894b.mount: Deactivated successfully. 
Aug 13 07:20:55.175731 kubelet[2513]: I0813 07:20:55.175619 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa38a5c-6cbe-4899-8188-7ef31f226de4" path="/var/lib/kubelet/pods/3fa38a5c-6cbe-4899-8188-7ef31f226de4/volumes" Aug 13 07:20:55.176502 kubelet[2513]: I0813 07:20:55.176474 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a6f13f7-86d1-4e95-9e7e-7d6e074d270a" path="/var/lib/kubelet/pods/8a6f13f7-86d1-4e95-9e7e-7d6e074d270a/volumes" Aug 13 07:20:55.872629 sshd[4162]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:55.882983 systemd[1]: sshd@23-10.0.0.153:22-10.0.0.1:38302.service: Deactivated successfully. Aug 13 07:20:55.884796 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:20:55.886553 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:20:55.894148 systemd[1]: Started sshd@24-10.0.0.153:22-10.0.0.1:38304.service - OpenSSH per-connection server daemon (10.0.0.1:38304). Aug 13 07:20:55.895104 systemd-logind[1453]: Removed session 24. Aug 13 07:20:55.932142 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 38304 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:55.933803 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:55.937704 systemd-logind[1453]: New session 25 of user core. Aug 13 07:20:55.943026 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:20:56.257961 sshd[4325]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:56.270144 systemd[1]: sshd@24-10.0.0.153:22-10.0.0.1:38304.service: Deactivated successfully. Aug 13 07:20:56.272193 systemd[1]: session-25.scope: Deactivated successfully. 
Aug 13 07:20:56.280695 kubelet[2513]: I0813 07:20:56.279185 2513 memory_manager.go:355] "RemoveStaleState removing state" podUID="8a6f13f7-86d1-4e95-9e7e-7d6e074d270a" containerName="cilium-operator" Aug 13 07:20:56.280695 kubelet[2513]: I0813 07:20:56.279213 2513 memory_manager.go:355] "RemoveStaleState removing state" podUID="3fa38a5c-6cbe-4899-8188-7ef31f226de4" containerName="cilium-agent" Aug 13 07:20:56.283299 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:20:56.287309 kubelet[2513]: W0813 07:20:56.286286 2513 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 13 07:20:56.287309 kubelet[2513]: E0813 07:20:56.286330 2513 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:56.287309 kubelet[2513]: W0813 07:20:56.286370 2513 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 13 07:20:56.287309 kubelet[2513]: E0813 07:20:56.286381 2513 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:56.287309 kubelet[2513]: W0813 07:20:56.286410 2513 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 13 07:20:56.287492 kubelet[2513]: E0813 07:20:56.286420 2513 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:56.287492 kubelet[2513]: I0813 07:20:56.286442 2513 status_manager.go:890] "Failed to get status for pod" podUID="57376c87-050c-4390-b047-8210aa6f6c7d" pod="kube-system/cilium-cdbzq" err="pods \"cilium-cdbzq\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Aug 13 07:20:56.287492 kubelet[2513]: W0813 07:20:56.286478 2513 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 13 07:20:56.287492 kubelet[2513]: E0813 07:20:56.286488 2513 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is 
forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:56.297100 systemd[1]: Started sshd@25-10.0.0.153:22-10.0.0.1:38318.service - OpenSSH per-connection server daemon (10.0.0.1:38318). Aug 13 07:20:56.301950 systemd-logind[1453]: Removed session 25. Aug 13 07:20:56.309557 systemd[1]: Created slice kubepods-burstable-pod57376c87_050c_4390_b047_8210aa6f6c7d.slice - libcontainer container kubepods-burstable-pod57376c87_050c_4390_b047_8210aa6f6c7d.slice. Aug 13 07:20:56.336908 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 38318 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:56.338652 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:56.342789 systemd-logind[1453]: New session 26 of user core. Aug 13 07:20:56.353045 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 07:20:56.360025 kubelet[2513]: I0813 07:20:56.359994 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-lib-modules\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360111 kubelet[2513]: I0813 07:20:56.360033 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-bpf-maps\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360111 kubelet[2513]: I0813 07:20:56.360050 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-config-path\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360111 kubelet[2513]: I0813 07:20:56.360066 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-hostproc\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360111 kubelet[2513]: I0813 07:20:56.360080 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-cgroup\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360111 kubelet[2513]: I0813 07:20:56.360107 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-etc-cni-netd\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360304 kubelet[2513]: I0813 07:20:56.360123 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57376c87-050c-4390-b047-8210aa6f6c7d-hubble-tls\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360304 kubelet[2513]: I0813 07:20:56.360205 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-run\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360304 kubelet[2513]: I0813 07:20:56.360256 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-xtables-lock\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360304 kubelet[2513]: I0813 07:20:56.360278 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57376c87-050c-4390-b047-8210aa6f6c7d-clustermesh-secrets\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360304 kubelet[2513]: I0813 07:20:56.360304 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-host-proc-sys-net\") pod 
\"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360438 kubelet[2513]: I0813 07:20:56.360322 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-host-proc-sys-kernel\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360438 kubelet[2513]: I0813 07:20:56.360348 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57376c87-050c-4390-b047-8210aa6f6c7d-cni-path\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360438 kubelet[2513]: I0813 07:20:56.360362 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-ipsec-secrets\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.360438 kubelet[2513]: I0813 07:20:56.360377 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdkz\" (UniqueName: \"kubernetes.io/projected/57376c87-050c-4390-b047-8210aa6f6c7d-kube-api-access-hwdkz\") pod \"cilium-cdbzq\" (UID: \"57376c87-050c-4390-b047-8210aa6f6c7d\") " pod="kube-system/cilium-cdbzq" Aug 13 07:20:56.403960 sshd[4338]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:56.414651 systemd[1]: sshd@25-10.0.0.153:22-10.0.0.1:38318.service: Deactivated successfully. Aug 13 07:20:56.416522 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:20:56.418138 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. 
Aug 13 07:20:56.425303 systemd[1]: Started sshd@26-10.0.0.153:22-10.0.0.1:38320.service - OpenSSH per-connection server daemon (10.0.0.1:38320).
Aug 13 07:20:56.426993 systemd-logind[1453]: Removed session 26.
Aug 13 07:20:56.459910 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 38320 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:20:56.461528 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:56.465877 systemd-logind[1453]: New session 27 of user core.
Aug 13 07:20:56.473046 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 07:20:57.461821 kubelet[2513]: E0813 07:20:57.461755 2513 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Aug 13 07:20:57.462227 kubelet[2513]: E0813 07:20:57.461860 2513 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-config-path podName:57376c87-050c-4390-b047-8210aa6f6c7d nodeName:}" failed. No retries permitted until 2025-08-13 07:20:57.961834489 +0000 UTC m=+78.863254957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-config-path") pod "cilium-cdbzq" (UID: "57376c87-050c-4390-b047-8210aa6f6c7d") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 07:20:57.462227 kubelet[2513]: E0813 07:20:57.462186 2513 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Aug 13 07:20:57.462360 kubelet[2513]: E0813 07:20:57.462284 2513 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-ipsec-secrets podName:57376c87-050c-4390-b047-8210aa6f6c7d nodeName:}" failed. No retries permitted until 2025-08-13 07:20:57.96226341 +0000 UTC m=+78.863683877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/57376c87-050c-4390-b047-8210aa6f6c7d-cilium-ipsec-secrets") pod "cilium-cdbzq" (UID: "57376c87-050c-4390-b047-8210aa6f6c7d") : failed to sync secret cache: timed out waiting for the condition
Aug 13 07:20:58.115448 kubelet[2513]: E0813 07:20:58.115405 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:20:58.116010 containerd[1476]: time="2025-08-13T07:20:58.115967561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdbzq,Uid:57376c87-050c-4390-b047-8210aa6f6c7d,Namespace:kube-system,Attempt:0,}"
Aug 13 07:20:58.137240 containerd[1476]: time="2025-08-13T07:20:58.137151167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:20:58.137240 containerd[1476]: time="2025-08-13T07:20:58.137206372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:20:58.137240 containerd[1476]: time="2025-08-13T07:20:58.137217983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:20:58.137427 containerd[1476]: time="2025-08-13T07:20:58.137305289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:20:58.158026 systemd[1]: Started cri-containerd-df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e.scope - libcontainer container df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e.
Aug 13 07:20:58.178588 containerd[1476]: time="2025-08-13T07:20:58.178538516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdbzq,Uid:57376c87-050c-4390-b047-8210aa6f6c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\""
Aug 13 07:20:58.179351 kubelet[2513]: E0813 07:20:58.179319 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:20:58.214243 containerd[1476]: time="2025-08-13T07:20:58.214192992Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:20:58.227666 containerd[1476]: time="2025-08-13T07:20:58.227614477Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b\""
Aug 13 07:20:58.228256 containerd[1476]: time="2025-08-13T07:20:58.228166582Z" level=info msg="StartContainer for \"4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b\""
Aug 13 07:20:58.256042 systemd[1]: Started cri-containerd-4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b.scope - libcontainer container 4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b.
Aug 13 07:20:58.280077 containerd[1476]: time="2025-08-13T07:20:58.280020647Z" level=info msg="StartContainer for \"4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b\" returns successfully"
Aug 13 07:20:58.290880 systemd[1]: cri-containerd-4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b.scope: Deactivated successfully.
Aug 13 07:20:58.323863 containerd[1476]: time="2025-08-13T07:20:58.323800381Z" level=info msg="shim disconnected" id=4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b namespace=k8s.io
Aug 13 07:20:58.323863 containerd[1476]: time="2025-08-13T07:20:58.323861907Z" level=warning msg="cleaning up after shim disconnected" id=4948e16a9bfeffd41df8437fac5c8de889b27ff88546cecb7727b1e4a14f504b namespace=k8s.io
Aug 13 07:20:58.324099 containerd[1476]: time="2025-08-13T07:20:58.323873769Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:20:58.513475 kubelet[2513]: E0813 07:20:58.513446 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:20:58.515501 containerd[1476]: time="2025-08-13T07:20:58.515459209Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:20:58.529173 containerd[1476]: time="2025-08-13T07:20:58.529129464Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb\""
Aug 13 07:20:58.529643 containerd[1476]: time="2025-08-13T07:20:58.529596788Z" level=info msg="StartContainer for \"7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb\""
Aug 13 07:20:58.560027 systemd[1]: Started cri-containerd-7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb.scope - libcontainer container 7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb.
Aug 13 07:20:58.588047 containerd[1476]: time="2025-08-13T07:20:58.587875766Z" level=info msg="StartContainer for \"7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb\" returns successfully"
Aug 13 07:20:58.595397 systemd[1]: cri-containerd-7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb.scope: Deactivated successfully.
Aug 13 07:20:58.632518 containerd[1476]: time="2025-08-13T07:20:58.632448699Z" level=info msg="shim disconnected" id=7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb namespace=k8s.io
Aug 13 07:20:58.632518 containerd[1476]: time="2025-08-13T07:20:58.632515676Z" level=warning msg="cleaning up after shim disconnected" id=7e434782ca3c3ca4703a98e7c0b40df40703f287285ce58b49a36d0167c427cb namespace=k8s.io
Aug 13 07:20:58.632724 containerd[1476]: time="2025-08-13T07:20:58.632525374Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:20:59.238861 kubelet[2513]: E0813 07:20:59.238818 2513 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:20:59.517376 kubelet[2513]: E0813 07:20:59.517249 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:20:59.519020 containerd[1476]: time="2025-08-13T07:20:59.518934223Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:20:59.541107 containerd[1476]: time="2025-08-13T07:20:59.541038317Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc\""
Aug 13 07:20:59.541664 containerd[1476]: time="2025-08-13T07:20:59.541621080Z" level=info msg="StartContainer for \"6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc\""
Aug 13 07:20:59.572067 systemd[1]: Started cri-containerd-6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc.scope - libcontainer container 6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc.
Aug 13 07:20:59.607312 systemd[1]: cri-containerd-6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc.scope: Deactivated successfully.
Aug 13 07:20:59.622284 containerd[1476]: time="2025-08-13T07:20:59.622224496Z" level=info msg="StartContainer for \"6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc\" returns successfully"
Aug 13 07:20:59.645622 containerd[1476]: time="2025-08-13T07:20:59.645536966Z" level=info msg="shim disconnected" id=6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc namespace=k8s.io
Aug 13 07:20:59.645622 containerd[1476]: time="2025-08-13T07:20:59.645606297Z" level=warning msg="cleaning up after shim disconnected" id=6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc namespace=k8s.io
Aug 13 07:20:59.645622 containerd[1476]: time="2025-08-13T07:20:59.645617818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:20:59.977804 systemd[1]: run-containerd-runc-k8s.io-6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc-runc.myPngC.mount: Deactivated successfully.
Aug 13 07:20:59.977934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3d133c0ffbe1842659e46a3452cdb2c0587503f9fc9bbf5aa5f3fdf84778fc-rootfs.mount: Deactivated successfully.
Aug 13 07:21:00.521513 kubelet[2513]: E0813 07:21:00.521481 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:00.524067 containerd[1476]: time="2025-08-13T07:21:00.523995146Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:21:00.541025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966919289.mount: Deactivated successfully.
Aug 13 07:21:00.676218 containerd[1476]: time="2025-08-13T07:21:00.676175469Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f\""
Aug 13 07:21:00.676755 containerd[1476]: time="2025-08-13T07:21:00.676687599Z" level=info msg="StartContainer for \"4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f\""
Aug 13 07:21:00.707025 systemd[1]: Started cri-containerd-4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f.scope - libcontainer container 4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f.
Aug 13 07:21:00.731789 systemd[1]: cri-containerd-4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f.scope: Deactivated successfully.
Aug 13 07:21:00.733572 containerd[1476]: time="2025-08-13T07:21:00.733520021Z" level=info msg="StartContainer for \"4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f\" returns successfully"
Aug 13 07:21:00.756831 containerd[1476]: time="2025-08-13T07:21:00.756759470Z" level=info msg="shim disconnected" id=4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f namespace=k8s.io
Aug 13 07:21:00.756831 containerd[1476]: time="2025-08-13T07:21:00.756822900Z" level=warning msg="cleaning up after shim disconnected" id=4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f namespace=k8s.io
Aug 13 07:21:00.756831 containerd[1476]: time="2025-08-13T07:21:00.756831867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:21:00.977812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a18ef28105b96ecda7c97232d54b82ae48c94348829c4e8a1b99f8e47b7626f-rootfs.mount: Deactivated successfully.
Aug 13 07:21:01.526149 kubelet[2513]: E0813 07:21:01.526099 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:01.528391 containerd[1476]: time="2025-08-13T07:21:01.528335499Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:21:01.543856 containerd[1476]: time="2025-08-13T07:21:01.543804860Z" level=info msg="CreateContainer within sandbox \"df45db3243dc6e57d4ef546ec7c99a8b8b9044e2c8e5e1047b9eab8c19db973e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"68ec4b4919cde098f2cbf47b4f23e5b0c8469daace03af12255f957f5573400a\""
Aug 13 07:21:01.544381 containerd[1476]: time="2025-08-13T07:21:01.544325226Z" level=info msg="StartContainer for \"68ec4b4919cde098f2cbf47b4f23e5b0c8469daace03af12255f957f5573400a\""
Aug 13 07:21:01.550363 kubelet[2513]: I0813 07:21:01.550305 2513 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T07:21:01Z","lastTransitionTime":"2025-08-13T07:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 07:21:01.578053 systemd[1]: Started cri-containerd-68ec4b4919cde098f2cbf47b4f23e5b0c8469daace03af12255f957f5573400a.scope - libcontainer container 68ec4b4919cde098f2cbf47b4f23e5b0c8469daace03af12255f957f5573400a.
Aug 13 07:21:01.608727 containerd[1476]: time="2025-08-13T07:21:01.608686428Z" level=info msg="StartContainer for \"68ec4b4919cde098f2cbf47b4f23e5b0c8469daace03af12255f957f5573400a\" returns successfully"
Aug 13 07:21:02.017931 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 07:21:02.530354 kubelet[2513]: E0813 07:21:02.530320 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:02.542039 kubelet[2513]: I0813 07:21:02.541953 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdbzq" podStartSLOduration=6.541929134 podStartE2EDuration="6.541929134s" podCreationTimestamp="2025-08-13 07:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:21:02.541678228 +0000 UTC m=+83.443098695" watchObservedRunningTime="2025-08-13 07:21:02.541929134 +0000 UTC m=+83.443349601"
Aug 13 07:21:04.116873 kubelet[2513]: E0813 07:21:04.116835 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:04.173867 kubelet[2513]: E0813 07:21:04.173818 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:05.066825 systemd-networkd[1398]: lxc_health: Link UP
Aug 13 07:21:05.072132 systemd-networkd[1398]: lxc_health: Gained carrier
Aug 13 07:21:06.118534 kubelet[2513]: E0813 07:21:06.118495 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:06.537756 kubelet[2513]: E0813 07:21:06.537721 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:06.655153 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Aug 13 07:21:07.538881 kubelet[2513]: E0813 07:21:07.538844 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:21:11.270208 kubelet[2513]: E0813 07:21:11.270129 2513 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39288->127.0.0.1:33771: write tcp 127.0.0.1:39288->127.0.0.1:33771: write: broken pipe
Aug 13 07:21:11.274298 sshd[4346]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:11.278544 systemd[1]: sshd@26-10.0.0.153:22-10.0.0.1:38320.service: Deactivated successfully.
Aug 13 07:21:11.280581 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 07:21:11.281209 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit.
Aug 13 07:21:11.282045 systemd-logind[1453]: Removed session 27.