Oct 13 05:26:57.404266 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025
Oct 13 05:26:57.404301 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:57.404310 kernel: BIOS-provided physical RAM map:
Oct 13 05:26:57.404317 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 13 05:26:57.404324 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 13 05:26:57.404331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 13 05:26:57.404342 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 13 05:26:57.404349 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 13 05:26:57.404359 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 13 05:26:57.404366 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 13 05:26:57.404372 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:26:57.404379 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 13 05:26:57.404386 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:26:57.404393 kernel: NX (Execute Disable) protection: active
Oct 13 05:26:57.404404 kernel: APIC: Static calls initialized
Oct 13 05:26:57.404412 kernel: SMBIOS 2.8 present.
Oct 13 05:26:57.404422 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 13 05:26:57.404430 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:26:57.404437 kernel: Hypervisor detected: KVM
Oct 13 05:26:57.404444 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:26:57.404452 kernel: kvm-clock: using sched offset of 3636909245 cycles
Oct 13 05:26:57.404462 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:26:57.404470 kernel: tsc: Detected 2794.750 MHz processor
Oct 13 05:26:57.404478 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:26:57.404486 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:26:57.404494 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 13 05:26:57.404502 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 13 05:26:57.404510 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:26:57.404520 kernel: Using GB pages for direct mapping
Oct 13 05:26:57.404528 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:26:57.404536 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 13 05:26:57.404544 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404552 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404559 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404567 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 13 05:26:57.404575 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404585 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404593 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404601 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:26:57.404608 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 13 05:26:57.404620 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 13 05:26:57.404630 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 13 05:26:57.404638 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 13 05:26:57.404677 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 13 05:26:57.404686 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 13 05:26:57.404708 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 13 05:26:57.404716 kernel: No NUMA configuration found
Oct 13 05:26:57.404727 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 13 05:26:57.404736 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 13 05:26:57.404757 kernel: Zone ranges:
Oct 13 05:26:57.404789 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:26:57.404809 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 13 05:26:57.404829 kernel: Normal empty
Oct 13 05:26:57.404848 kernel: Device empty
Oct 13 05:26:57.404857 kernel: Movable zone start for each node
Oct 13 05:26:57.404867 kernel: Early memory node ranges
Oct 13 05:26:57.404875 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 13 05:26:57.404883 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 13 05:26:57.404917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 13 05:26:57.404925 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:26:57.404933 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 05:26:57.404945 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 13 05:26:57.404956 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:26:57.404966 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:26:57.404974 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:26:57.404982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:26:57.404990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:26:57.405001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:26:57.405009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:26:57.405020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:26:57.405028 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:26:57.405036 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:26:57.405044 kernel: TSC deadline timer available
Oct 13 05:26:57.405052 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:26:57.405060 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:26:57.405093 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:26:57.405102 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:26:57.405112 kernel: CPU topo: Num. cores per package: 4
Oct 13 05:26:57.405120 kernel: CPU topo: Num. threads per package: 4
Oct 13 05:26:57.405128 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 13 05:26:57.405136 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:26:57.405144 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 13 05:26:57.405159 kernel: kvm-guest: setup PV sched yield
Oct 13 05:26:57.405186 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 13 05:26:57.405215 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:26:57.405225 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:26:57.405233 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 13 05:26:57.405241 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 13 05:26:57.405249 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 13 05:26:57.405257 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 13 05:26:57.405265 kernel: kvm-guest: PV spinlocks enabled
Oct 13 05:26:57.405274 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:26:57.405286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:57.405294 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:26:57.405303 kernel: random: crng init done
Oct 13 05:26:57.405311 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:26:57.405319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:26:57.405327 kernel: Fallback order for Node 0: 0
Oct 13 05:26:57.405338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 13 05:26:57.405346 kernel: Policy zone: DMA32
Oct 13 05:26:57.405368 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:26:57.405388 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 05:26:57.405396 kernel: ftrace: allocating 40210 entries in 158 pages
Oct 13 05:26:57.405404 kernel: ftrace: allocated 158 pages with 5 groups
Oct 13 05:26:57.405412 kernel: Dynamic Preempt: voluntary
Oct 13 05:26:57.405420 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:26:57.405432 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:26:57.405440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 05:26:57.405448 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:26:57.405460 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:26:57.405469 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:26:57.405477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:26:57.405485 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 05:26:57.405496 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:26:57.405505 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:26:57.405513 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:26:57.405521 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 13 05:26:57.405529 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:26:57.405547 kernel: Console: colour VGA+ 80x25
Oct 13 05:26:57.405556 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:26:57.405564 kernel: ACPI: Core revision 20240827
Oct 13 05:26:57.405573 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:26:57.405584 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:26:57.405592 kernel: x2apic enabled
Oct 13 05:26:57.405602 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:26:57.405611 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 13 05:26:57.405620 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 13 05:26:57.405637 kernel: kvm-guest: setup PV IPIs
Oct 13 05:26:57.405658 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:26:57.405668 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:26:57.405677 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 13 05:26:57.405697 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:26:57.405707 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 05:26:57.405725 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 05:26:57.405734 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:26:57.405742 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:26:57.405750 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:26:57.405759 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 05:26:57.405767 kernel: active return thunk: retbleed_return_thunk
Oct 13 05:26:57.405776 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 05:26:57.405791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:26:57.405800 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:26:57.405808 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 05:26:57.405817 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 05:26:57.405826 kernel: active return thunk: srso_return_thunk
Oct 13 05:26:57.405834 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 05:26:57.405843 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:26:57.405858 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:26:57.405867 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:26:57.405875 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:26:57.405884 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:26:57.405892 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:26:57.405900 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:26:57.405909 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:26:57.405923 kernel: landlock: Up and running.
Oct 13 05:26:57.405931 kernel: SELinux: Initializing.
Oct 13 05:26:57.405942 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:26:57.405950 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:26:57.405959 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 05:26:57.405967 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 05:26:57.405975 kernel: ... version: 0
Oct 13 05:26:57.405990 kernel: ... bit width: 48
Oct 13 05:26:57.405998 kernel: ... generic registers: 6
Oct 13 05:26:57.406007 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:26:57.406015 kernel: ... max period: 00007fffffffffff
Oct 13 05:26:57.406023 kernel: ... fixed-purpose events: 0
Oct 13 05:26:57.406031 kernel: ... event mask: 000000000000003f
Oct 13 05:26:57.406040 kernel: signal: max sigframe size: 1776
Oct 13 05:26:57.406054 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:26:57.406063 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:26:57.406072 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:26:57.406080 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:26:57.406088 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:26:57.406097 kernel: .... node #0, CPUs: #1 #2 #3
Oct 13 05:26:57.406105 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 05:26:57.406113 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 13 05:26:57.406129 kernel: Memory: 2459628K/2571752K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 106184K reserved, 0K cma-reserved)
Oct 13 05:26:57.406137 kernel: devtmpfs: initialized
Oct 13 05:26:57.406145 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:26:57.406162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:26:57.406174 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 05:26:57.406187 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:26:57.406199 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:26:57.406218 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:26:57.406226 kernel: audit: type=2000 audit(1760333214.047:1): state=initialized audit_enabled=0 res=1
Oct 13 05:26:57.406235 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:26:57.406244 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:26:57.406252 kernel: cpuidle: using governor menu
Oct 13 05:26:57.406260 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:26:57.406269 kernel: dca service started, version 1.12.1
Oct 13 05:26:57.406284 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 13 05:26:57.406293 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 13 05:26:57.406301 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:26:57.406310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:26:57.406318 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:26:57.406326 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:26:57.406335 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:26:57.406350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:26:57.406358 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:26:57.406366 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:26:57.406375 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:26:57.406383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:26:57.406393 kernel: ACPI: Interpreter enabled
Oct 13 05:26:57.406402 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 13 05:26:57.406416 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:26:57.406425 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:26:57.406433 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:26:57.406446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 13 05:26:57.406455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:26:57.406723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:26:57.406920 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 13 05:26:57.407104 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 13 05:26:57.407116 kernel: PCI host bridge to bus 0000:00
Oct 13 05:26:57.407326 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:26:57.407495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 13 05:26:57.407733 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:26:57.407946 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 13 05:26:57.408104 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 05:26:57.408324 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 13 05:26:57.408484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:26:57.408696 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:26:57.408895 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:26:57.409068 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 13 05:26:57.409275 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 13 05:26:57.409448 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 13 05:26:57.409618 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:26:57.409818 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:26:57.410008 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 13 05:26:57.410195 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 13 05:26:57.410372 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 13 05:26:57.410560 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:26:57.410754 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 13 05:26:57.410927 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 13 05:26:57.411117 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 13 05:26:57.411322 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:26:57.411497 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 13 05:26:57.411685 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 13 05:26:57.411860 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 13 05:26:57.412049 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 13 05:26:57.412251 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:26:57.412426 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 13 05:26:57.412608 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 13 05:26:57.412802 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 13 05:26:57.412974 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 13 05:26:57.413188 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 13 05:26:57.413373 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 13 05:26:57.413385 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:26:57.413394 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:26:57.413403 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:26:57.413414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:26:57.413434 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 13 05:26:57.413442 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 13 05:26:57.413451 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 13 05:26:57.413459 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 13 05:26:57.413467 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 13 05:26:57.413476 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 13 05:26:57.413484 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 13 05:26:57.413499 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 13 05:26:57.413508 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 13 05:26:57.413516 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 13 05:26:57.413525 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 13 05:26:57.413533 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 13 05:26:57.413541 kernel: iommu: Default domain type: Translated
Oct 13 05:26:57.413550 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:26:57.413565 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:26:57.413573 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:26:57.413582 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 13 05:26:57.413590 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 13 05:26:57.413782 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 13 05:26:57.413952 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 13 05:26:57.414122 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:26:57.414146 kernel: vgaarb: loaded
Oct 13 05:26:57.414163 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:26:57.414174 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:26:57.414185 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:26:57.414196 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:26:57.414208 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:26:57.414216 kernel: pnp: PnP ACPI init
Oct 13 05:26:57.414416 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 13 05:26:57.414429 kernel: pnp: PnP ACPI: found 6 devices
Oct 13 05:26:57.414438 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:26:57.414447 kernel: NET: Registered PF_INET protocol family
Oct 13 05:26:57.414455 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:26:57.414464 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 05:26:57.414472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:26:57.414491 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:26:57.414500 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 05:26:57.414508 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 05:26:57.414517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:26:57.414525 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:26:57.414534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:26:57.414542 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:26:57.414729 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 13 05:26:57.414892 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 13 05:26:57.415049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:26:57.415226 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 13 05:26:57.415387 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 13 05:26:57.415544 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 13 05:26:57.415661 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:26:57.415670 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:26:57.415679 kernel: Initialise system trusted keyrings
Oct 13 05:26:57.415687 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 05:26:57.415696 kernel: Key type asymmetric registered
Oct 13 05:26:57.415704 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:26:57.415713 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:26:57.415730 kernel: io scheduler mq-deadline registered
Oct 13 05:26:57.415739 kernel: io scheduler kyber registered
Oct 13 05:26:57.415747 kernel: io scheduler bfq registered
Oct 13 05:26:57.415756 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:26:57.415765 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 13 05:26:57.415774 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 13 05:26:57.415782 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 13 05:26:57.415791 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:26:57.415806 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:26:57.415815 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:26:57.415824 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:26:57.415832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:26:57.416023 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 05:26:57.416036 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:26:57.416227 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 05:26:57.416400 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:26:55 UTC (1760333215)
Oct 13 05:26:57.416563 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 13 05:26:57.416574 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 05:26:57.416583 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:26:57.416591 kernel: Segment Routing with IPv6
Oct 13 05:26:57.416600 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:26:57.416620 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:26:57.416628 kernel: Key type dns_resolver registered
Oct 13 05:26:57.416637 kernel: IPI shorthand broadcast: enabled
Oct 13 05:26:57.416662 kernel: sched_clock: Marking stable (1321003216, 210066755)->(1591218925, -60148954)
Oct 13 05:26:57.416671 kernel: registered taskstats version 1
Oct 13 05:26:57.416680 kernel: Loading compiled-in X.509 certificates
Oct 13 05:26:57.416688 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7'
Oct 13 05:26:57.416706 kernel: Demotion targets for Node 0: null
Oct 13 05:26:57.416714 kernel: Key type .fscrypt registered
Oct 13 05:26:57.416722 kernel: Key type fscrypt-provisioning registered
Oct 13 05:26:57.416731 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:26:57.416739 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:26:57.416748 kernel: ima: No architecture policies found
Oct 13 05:26:57.416756 kernel: clk: Disabling unused clocks
Oct 13 05:26:57.416772 kernel: Freeing unused kernel image (initmem) memory: 24532K
Oct 13 05:26:57.416780 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:26:57.416789 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K
Oct 13 05:26:57.416797 kernel: Run /init as init process
Oct 13 05:26:57.416805 kernel: with arguments:
Oct 13 05:26:57.416814 kernel: /init
Oct 13 05:26:57.416822 kernel: with environment:
Oct 13 05:26:57.416837 kernel: HOME=/
Oct 13 05:26:57.416845 kernel: TERM=linux
Oct 13 05:26:57.416854 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:26:57.416862 kernel: SCSI subsystem initialized
Oct 13 05:26:57.416871 kernel: libata version 3.00 loaded.
Oct 13 05:26:57.417102 kernel: ahci 0000:00:1f.2: version 3.0
Oct 13 05:26:57.417115 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 13 05:26:57.417321 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 13 05:26:57.417498 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 13 05:26:57.417709 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 13 05:26:57.417915 kernel: scsi host0: ahci
Oct 13 05:26:57.418104 kernel: scsi host1: ahci
Oct 13 05:26:57.418309 kernel: scsi host2: ahci
Oct 13 05:26:57.418511 kernel: scsi host3: ahci
Oct 13 05:26:57.418714 kernel: scsi host4: ahci
Oct 13 05:26:57.418903 kernel: scsi host5: ahci
Oct 13 05:26:57.418928 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Oct 13 05:26:57.418937 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Oct 13 05:26:57.418953 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Oct 13 05:26:57.418961 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Oct 13 05:26:57.418970 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Oct 13 05:26:57.418979 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Oct 13 05:26:57.418988 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 13 05:26:57.418997 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 13 05:26:57.419005 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 13 05:26:57.419021 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 13 05:26:57.419029 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 13 05:26:57.419038 kernel: ata3.00: LPM support broken, forcing max_power
Oct 13 05:26:57.419047 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 13 05:26:57.419055 kernel: ata3.00: applying bridge limits
Oct 13 05:26:57.419064 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 13 05:26:57.419073 kernel: ata3.00: LPM support broken, forcing max_power
Oct 13 05:26:57.419081 kernel: ata3.00: configured for UDMA/100
Oct 13 05:26:57.419315 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 13 05:26:57.419512 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 13 05:26:57.419703 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 13 05:26:57.419716 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 05:26:57.419725 kernel: GPT:16515071 != 27000831
Oct 13 05:26:57.419747 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 05:26:57.419756 kernel: GPT:16515071 != 27000831
Oct 13 05:26:57.419764 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 05:26:57.419772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:26:57.419782 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.419977 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 13 05:26:57.419990 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 05:26:57.420207 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 13 05:26:57.420224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 05:26:57.420236 kernel: device-mapper: uevent: version 1.0.3
Oct 13 05:26:57.420247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 05:26:57.420256 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 13 05:26:57.420265 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420284 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420293 kernel: raid6: avx2x4 gen() 28104 MB/s
Oct 13 05:26:57.420302 kernel: raid6: avx2x2 gen() 28595 MB/s
Oct 13 05:26:57.420310 kernel: raid6: avx2x1 gen() 23144 MB/s
Oct 13 05:26:57.420319 kernel: raid6: using algorithm avx2x2 gen() 28595 MB/s
Oct 13 05:26:57.420328 kernel: raid6: .... xor() 18563 MB/s, rmw enabled
Oct 13 05:26:57.420343 kernel: raid6: using avx2x2 recovery algorithm
Oct 13 05:26:57.420352 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420361 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420369 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420378 kernel: xor: automatically using best checksumming function avx
Oct 13 05:26:57.420387 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420395 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 05:26:57.420411 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (195)
Oct 13 05:26:57.420420 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3
Oct 13 05:26:57.420429 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:26:57.420438 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 05:26:57.420447 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 05:26:57.420455 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:57.420464 kernel: loop: module loaded
Oct 13 05:26:57.420472 kernel: loop0: detected capacity change from 0 to 100048
Oct 13 05:26:57.420488 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 05:26:57.420497 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:26:57.420509 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:26:57.420519 systemd[1]: Detected virtualization kvm.
Oct 13 05:26:57.420528 systemd[1]: Detected architecture x86-64.
Oct 13 05:26:57.420544 systemd[1]: Running in initrd.
Oct 13 05:26:57.420553 systemd[1]: No hostname configured, using default hostname.
Oct 13 05:26:57.420563 systemd[1]: Hostname set to .
Oct 13 05:26:57.420572 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 13 05:26:57.420581 systemd[1]: Queued start job for default target initrd.target.
Oct 13 05:26:57.420590 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:26:57.420600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:26:57.420621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:26:57.420631 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:26:57.420640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:26:57.420667 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 05:26:57.420678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:26:57.420694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:26:57.420704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:26:57.420713 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:26:57.420722 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:26:57.420732 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:26:57.420741 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:26:57.420750 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:26:57.420766 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:26:57.420776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:26:57.420785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 05:26:57.420794 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 05:26:57.420803 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:26:57.420813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:26:57.420822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:26:57.420838 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:26:57.420848 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 05:26:57.420858 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 05:26:57.420867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:26:57.420876 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 05:26:57.420886 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 05:26:57.420895 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 05:26:57.420912 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:26:57.420921 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:26:57.420930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:57.420940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:26:57.420956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 05:26:57.420965 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 05:26:57.420975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:26:57.421014 systemd-journald[328]: Collecting audit messages is disabled.
Oct 13 05:26:57.421042 systemd-journald[328]: Journal started
Oct 13 05:26:57.421062 systemd-journald[328]: Runtime Journal (/run/log/journal/f5348e2332d64b0c9c6732b50ac0e478) is 6M, max 48.6M, 42.5M free.
Oct 13 05:26:57.424672 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:26:57.427054 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:26:57.430664 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 05:26:57.432918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:26:57.438075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:26:57.442679 kernel: Bridge firewalling registered
Oct 13 05:26:57.442737 systemd-modules-load[330]: Inserted module 'br_netfilter'
Oct 13 05:26:57.447941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:26:57.512830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:26:57.525969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:57.530984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 05:26:57.534960 systemd-tmpfiles[349]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 05:26:57.535876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:26:57.545066 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:26:57.559618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:26:57.561969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:26:57.566523 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:26:57.570825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 05:26:57.592156 dracut-cmdline[373]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:57.621775 systemd-resolved[372]: Positive Trust Anchors:
Oct 13 05:26:57.621792 systemd-resolved[372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:26:57.621799 systemd-resolved[372]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 05:26:57.621831 systemd-resolved[372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:26:57.647803 systemd-resolved[372]: Defaulting to hostname 'linux'.
Oct 13 05:26:57.649403 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:26:57.653272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:26:57.726679 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 05:26:57.741676 kernel: iscsi: registered transport (tcp)
Oct 13 05:26:57.765782 kernel: iscsi: registered transport (qla4xxx)
Oct 13 05:26:57.765817 kernel: QLogic iSCSI HBA Driver
Oct 13 05:26:57.805103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:26:57.838107 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:26:57.841221 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:26:57.903962 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:26:57.906283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 05:26:57.908723 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 05:26:57.955112 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:26:57.957891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:26:57.995642 systemd-udevd[612]: Using default interface naming scheme 'v257'.
Oct 13 05:26:58.011982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:26:58.016296 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 05:26:58.052293 dracut-pre-trigger[671]: rd.md=0: removing MD RAID activation
Oct 13 05:26:58.062891 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:26:58.065952 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:26:58.089886 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:26:58.093245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:26:58.125821 systemd-networkd[731]: lo: Link UP
Oct 13 05:26:58.125828 systemd-networkd[731]: lo: Gained carrier
Oct 13 05:26:58.126453 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:26:58.127278 systemd[1]: Reached target network.target - Network.
Oct 13 05:26:58.187633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:26:58.190941 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 05:26:58.266946 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 13 05:26:58.283167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 13 05:26:58.304056 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 13 05:26:58.321916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:26:58.325121 kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 05:26:58.332157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 05:26:58.336848 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 13 05:26:58.335482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:26:58.336696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:58.337496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:58.340059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:58.352730 kernel: AES CTR mode by8 optimization enabled
Oct 13 05:26:58.357401 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:26:58.357406 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:26:58.357836 systemd-networkd[731]: eth0: Link UP
Oct 13 05:26:58.358794 systemd-networkd[731]: eth0: Gained carrier
Oct 13 05:26:58.358804 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:26:58.373757 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 05:26:58.376400 disk-uuid[829]: Primary Header is updated.
Oct 13 05:26:58.376400 disk-uuid[829]: Secondary Entries is updated.
Oct 13 05:26:58.376400 disk-uuid[829]: Secondary Header is updated.
Oct 13 05:26:58.525384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:58.571039 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:26:58.573435 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:26:58.576372 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:26:58.578278 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:26:58.581208 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 05:26:58.609112 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:26:59.421493 disk-uuid[839]: Warning: The kernel is still using the old partition table.
Oct 13 05:26:59.421493 disk-uuid[839]: The new table will be used at the next reboot or after you
Oct 13 05:26:59.421493 disk-uuid[839]: run partprobe(8) or kpartx(8)
Oct 13 05:26:59.421493 disk-uuid[839]: The operation has completed successfully.
Oct 13 05:26:59.438025 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 05:26:59.438198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 05:26:59.440441 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 05:26:59.482956 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (879)
Oct 13 05:26:59.483075 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:59.483101 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:26:59.488681 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:26:59.488726 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:26:59.497676 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:59.498436 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 05:26:59.502964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 05:26:59.511858 systemd-networkd[731]: eth0: Gained IPv6LL
Oct 13 05:26:59.829026 ignition[898]: Ignition 2.22.0
Oct 13 05:26:59.829076 ignition[898]: Stage: fetch-offline
Oct 13 05:26:59.829161 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:26:59.829175 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:26:59.829389 ignition[898]: parsed url from cmdline: ""
Oct 13 05:26:59.829402 ignition[898]: no config URL provided
Oct 13 05:26:59.829414 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 05:26:59.829435 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Oct 13 05:26:59.829505 ignition[898]: op(1): [started] loading QEMU firmware config module
Oct 13 05:26:59.829511 ignition[898]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 13 05:26:59.850434 ignition[898]: op(1): [finished] loading QEMU firmware config module
Oct 13 05:26:59.850479 ignition[898]: QEMU firmware config was not found. Ignoring...
Oct 13 05:26:59.931139 ignition[898]: parsing config with SHA512: 87595b709f3d095a613b7fb0fb0786feba55b167dc99e20ece1032cca1a54286e29730b4d76ed2ba22c47429971f07819ef354c67ec7fd963caed51a5e56db09
Oct 13 05:26:59.935433 unknown[898]: fetched base config from "system"
Oct 13 05:26:59.935448 unknown[898]: fetched user config from "qemu"
Oct 13 05:26:59.935850 ignition[898]: fetch-offline: fetch-offline passed
Oct 13 05:26:59.939131 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:26:59.935920 ignition[898]: Ignition finished successfully
Oct 13 05:26:59.939804 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 13 05:26:59.941244 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 05:27:00.004936 ignition[910]: Ignition 2.22.0
Oct 13 05:27:00.004950 ignition[910]: Stage: kargs
Oct 13 05:27:00.005090 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:00.005112 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:27:00.005834 ignition[910]: kargs: kargs passed
Oct 13 05:27:00.005882 ignition[910]: Ignition finished successfully
Oct 13 05:27:00.016258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 05:27:00.020561 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 05:27:00.068145 ignition[918]: Ignition 2.22.0
Oct 13 05:27:00.068170 ignition[918]: Stage: disks
Oct 13 05:27:00.068362 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:00.068373 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:27:00.069515 ignition[918]: disks: disks passed
Oct 13 05:27:00.069564 ignition[918]: Ignition finished successfully
Oct 13 05:27:00.078684 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 05:27:00.079620 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 05:27:00.083183 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 05:27:00.086673 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:27:00.090379 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:27:00.091198 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:27:00.099021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 05:27:00.150455 systemd-fsck[928]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 13 05:27:00.158483 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 05:27:00.163908 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 05:27:00.281696 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none.
Oct 13 05:27:00.281822 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 05:27:00.283087 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:27:00.287887 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:27:00.289407 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 05:27:00.291338 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 13 05:27:00.291373 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 05:27:00.291399 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:27:00.311239 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 05:27:00.313678 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 05:27:00.321130 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (937)
Oct 13 05:27:00.321159 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:27:00.321176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:27:00.326393 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:27:00.326425 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:27:00.328275 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:27:00.369563 initrd-setup-root[961]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 05:27:00.375402 initrd-setup-root[968]: cut: /sysroot/etc/group: No such file or directory
Oct 13 05:27:00.381186 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 05:27:00.386845 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 05:27:00.485133 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 05:27:00.489984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 05:27:00.491373 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 05:27:00.511706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:27:00.514168 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:27:00.527807 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 05:27:00.674647 ignition[1051]: INFO : Ignition 2.22.0
Oct 13 05:27:00.674647 ignition[1051]: INFO : Stage: mount
Oct 13 05:27:00.677379 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:00.677379 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:27:00.677379 ignition[1051]: INFO : mount: mount passed
Oct 13 05:27:00.677379 ignition[1051]: INFO : Ignition finished successfully
Oct 13 05:27:00.678415 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:27:00.681874 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:27:00.699908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:27:00.734686 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1063)
Oct 13 05:27:00.734760 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:27:00.737702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:27:00.741415 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:27:00.741485 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:27:00.743187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:27:00.791012 ignition[1080]: INFO : Ignition 2.22.0
Oct 13 05:27:00.791012 ignition[1080]: INFO : Stage: files
Oct 13 05:27:00.793510 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:00.793510 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:27:00.797678 ignition[1080]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 05:27:00.800781 ignition[1080]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 05:27:00.800781 ignition[1080]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 05:27:00.805602 ignition[1080]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 05:27:00.807937 ignition[1080]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 05:27:00.810506 unknown[1080]: wrote ssh authorized keys file for user: core
Oct 13 05:27:00.812171 ignition[1080]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 05:27:00.815473 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:27:00.818549 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 13 05:27:00.867002 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:27:00.969513 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:27:00.972777 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 13 05:27:00.995349 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 13 05:27:01.414230 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:27:01.933212 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 13 05:27:01.937446 ignition[1080]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 05:27:01.937446 ignition[1080]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 13 05:27:01.942507 ignition[1080]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:27:01.969195 ignition[1080]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:27:01.976531 ignition[1080]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:27:01.979075 ignition[1080]: INFO : files: files passed
Oct 13 05:27:01.979075 ignition[1080]: INFO : Ignition finished successfully
Oct 13 05:27:01.992764 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 05:27:01.997887 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 05:27:02.002255 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 05:27:02.015629 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:27:02.015785 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:27:02.022981 initrd-setup-root-after-ignition[1110]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 13 05:27:02.028288 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:27:02.028288 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:27:02.033292 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:27:02.038293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:27:02.039401 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:27:02.040858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:27:02.087864 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:27:02.088012 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:27:02.089316 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:27:02.089633 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:27:02.099395 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:27:02.100453 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:27:02.132301 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:27:02.134611 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:27:02.170931 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:27:02.171084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:27:02.172280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:27:02.173116 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:27:02.180301 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:27:02.180417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:27:02.185697 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:27:02.186527 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:27:02.191722 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:27:02.192585 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:27:02.197134 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:27:02.200360 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:27:02.203702 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:27:02.209451 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:27:02.210171 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:27:02.214183 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:27:02.217239 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:27:02.217998 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:27:02.218118 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:27:02.224968 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:27:02.226104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:27:02.230221 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:27:02.233198 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:27:02.236998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:27:02.237121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:27:02.243077 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:27:02.243197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:27:02.244290 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:27:02.248554 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:27:02.254713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:27:02.255438 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:27:02.259594 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:27:02.262414 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:27:02.262506 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:27:02.265183 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:27:02.265272 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:27:02.268199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:27:02.268315 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:27:02.271121 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:27:02.271231 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:27:02.278147 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:27:02.279724 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:27:02.283351 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:27:02.283525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:27:02.286198 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:27:02.286352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:27:02.289550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:27:02.289725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:27:02.304952 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:27:02.305730 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:27:02.320117 ignition[1136]: INFO : Ignition 2.22.0
Oct 13 05:27:02.320117 ignition[1136]: INFO : Stage: umount
Oct 13 05:27:02.322520 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:02.322520 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:27:02.322520 ignition[1136]: INFO : umount: umount passed
Oct 13 05:27:02.322520 ignition[1136]: INFO : Ignition finished successfully
Oct 13 05:27:02.329892 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:27:02.330600 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:27:02.330750 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:27:02.333796 systemd[1]: Stopped target network.target - Network.
Oct 13 05:27:02.335296 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:27:02.335361 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:27:02.336232 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:27:02.336284 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:27:02.341189 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:27:02.341248 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:27:02.344110 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:27:02.344164 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:27:02.347150 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:27:02.352149 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:27:02.364017 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:27:02.364211 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:27:02.370825 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:27:02.372717 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:27:02.372768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:27:02.378033 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:27:02.379085 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:27:02.379163 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:27:02.383096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:27:02.388811 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:27:02.388938 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:27:02.399336 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:27:02.399468 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:27:02.401624 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:27:02.401703 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:27:02.404209 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:27:02.404268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:27:02.404695 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:27:02.404743 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:27:02.415019 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:27:02.415220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:27:02.416847 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:27:02.416895 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:27:02.420272 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:27:02.420320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:27:02.423001 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:27:02.423068 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:27:02.429014 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:27:02.429077 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:27:02.433624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:27:02.433698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:27:02.439342 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:27:02.440624 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:27:02.440704 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:27:02.443717 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:27:02.443774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:27:02.448300 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 05:27:02.448355 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:27:02.451667 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:27:02.451721 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:27:02.452496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:27:02.452545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:27:02.467088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:27:02.467207 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:27:02.498341 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:27:02.498477 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:27:02.499513 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:27:02.500861 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:27:02.513466 systemd[1]: Switching root.
Oct 13 05:27:02.555793 systemd-journald[328]: Journal stopped
Oct 13 05:27:04.036527 systemd-journald[328]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:27:04.036598 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:27:04.036618 kernel: SELinux: policy capability open_perms=1
Oct 13 05:27:04.036630 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:27:04.036660 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:27:04.036674 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:27:04.036744 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:27:04.036760 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:27:04.036778 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:27:04.036790 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:27:04.036803 kernel: audit: type=1403 audit(1760333223.035:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:27:04.036816 systemd[1]: Successfully loaded SELinux policy in 66.524ms.
Oct 13 05:27:04.036835 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.235ms.
Oct 13 05:27:04.036858 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:27:04.036872 systemd[1]: Detected virtualization kvm.
Oct 13 05:27:04.036885 systemd[1]: Detected architecture x86-64.
Oct 13 05:27:04.036898 systemd[1]: Detected first boot.
Oct 13 05:27:04.036911 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 13 05:27:04.036925 zram_generator::config[1182]: No configuration found.
Oct 13 05:27:04.036938 kernel: Guest personality initialized and is inactive
Oct 13 05:27:04.036959 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 13 05:27:04.036978 kernel: Initialized host personality
Oct 13 05:27:04.036993 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:27:04.037006 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:27:04.037026 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:27:04.037042 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:27:04.037055 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:27:04.037085 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:27:04.037099 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:27:04.037114 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:27:04.037128 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:27:04.037143 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:27:04.037156 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:27:04.037169 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:27:04.037190 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:27:04.037203 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:27:04.037221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:27:04.037234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:27:04.037247 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:27:04.037260 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:27:04.037281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:27:04.037294 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:27:04.037307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:27:04.037325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:27:04.037338 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:27:04.037351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:27:04.037364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:27:04.037385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:27:04.037398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:27:04.037411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:27:04.037431 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:27:04.037446 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:27:04.037459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:27:04.037471 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:27:04.037495 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:27:04.037508 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:27:04.037521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:27:04.037534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:27:04.037547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:27:04.037560 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:27:04.037573 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:27:04.037597 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:27:04.037615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:04.037628 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:27:04.037641 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:27:04.037669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:27:04.037685 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:27:04.037701 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:27:04.037723 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:27:04.037737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:27:04.037750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:27:04.037763 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:27:04.037778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:27:04.037792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:27:04.037806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:27:04.037826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:27:04.037839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:27:04.037852 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:27:04.037866 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:27:04.037878 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:27:04.037891 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:27:04.037911 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:27:04.037925 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:27:04.037942 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:27:04.037954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:27:04.037967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:27:04.037981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:27:04.037993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:27:04.038015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:27:04.038036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:04.038049 kernel: fuse: init (API version 7.41)
Oct 13 05:27:04.038067 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:27:04.038088 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:27:04.038100 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:27:04.038134 systemd-journald[1238]: Collecting audit messages is disabled.
Oct 13 05:27:04.038161 systemd-journald[1238]: Journal started
Oct 13 05:27:04.038184 systemd-journald[1238]: Runtime Journal (/run/log/journal/f5348e2332d64b0c9c6732b50ac0e478) is 6M, max 48.6M, 42.5M free.
Oct 13 05:27:03.629068 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:27:03.651845 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 13 05:27:03.652424 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:27:04.121740 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:27:04.123845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:27:04.125977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:27:04.127951 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:27:04.130387 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:27:04.133166 kernel: ACPI: bus type drm_connector registered
Oct 13 05:27:04.134586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:27:04.136968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:27:04.137253 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:27:04.139800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:27:04.140042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:27:04.142384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:27:04.142619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:27:04.144679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:27:04.144908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:27:04.147381 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:27:04.147607 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:27:04.149813 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:27:04.150049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:27:04.152241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:27:04.154574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:27:04.158157 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:27:04.160709 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:27:04.174740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:27:04.181563 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:27:04.184082 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 13 05:27:04.187504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:27:04.190375 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:27:04.192123 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:27:04.192159 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:27:04.194725 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:27:04.196879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:27:04.198622 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:27:04.201528 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:27:04.203351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:27:04.204375 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:27:04.205201 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:27:04.206487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:27:04.210782 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:27:04.215805 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:27:04.217858 systemd-journald[1238]: Time spent on flushing to /var/log/journal/f5348e2332d64b0c9c6732b50ac0e478 is 16.655ms for 977 entries.
Oct 13 05:27:04.217858 systemd-journald[1238]: System Journal (/var/log/journal/f5348e2332d64b0c9c6732b50ac0e478) is 8M, max 163.5M, 155.5M free.
Oct 13 05:27:04.259759 systemd-journald[1238]: Received client request to flush runtime journal.
Oct 13 05:27:04.260140 kernel: loop1: detected capacity change from 0 to 128048
Oct 13 05:27:04.260369 kernel: loop2: detected capacity change from 0 to 110984
Oct 13 05:27:04.219624 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:27:04.221630 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:27:04.247706 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:27:04.250149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 05:27:04.256818 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 05:27:04.267915 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 05:27:04.270919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:27:04.288219 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Oct 13 05:27:04.288240 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Oct 13 05:27:04.292737 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 05:27:04.294680 kernel: loop3: detected capacity change from 0 to 229808
Oct 13 05:27:04.297081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:27:04.301586 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 05:27:04.318667 kernel: loop4: detected capacity change from 0 to 128048
Oct 13 05:27:04.328679 kernel: loop5: detected capacity change from 0 to 110984
Oct 13 05:27:04.338663 kernel: loop6: detected capacity change from 0 to 229808
Oct 13 05:27:04.344028 (sd-merge)[1321]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 13 05:27:04.345874 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 05:27:04.348163 (sd-merge)[1321]: Merged extensions into '/usr'.
Oct 13 05:27:04.350570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:27:04.353364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:27:04.357497 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 05:27:04.357590 systemd[1]: Reloading...
Oct 13 05:27:04.380551 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:27:04.380574 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:27:04.512737 zram_generator::config[1352]: No configuration found.
Oct 13 05:27:04.817399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 05:27:04.818745 systemd[1]: Reloading finished in 460 ms.
Oct 13 05:27:05.012338 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 05:27:05.015047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:27:05.047495 systemd[1]: Starting ensure-sysext.service...
Oct 13 05:27:05.050061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:27:05.063432 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 05:27:05.077703 systemd[1]: Reload requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)...
Oct 13 05:27:05.077730 systemd[1]: Reloading...
Oct 13 05:27:05.081973 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 05:27:05.082028 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 05:27:05.082288 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 05:27:05.082528 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 05:27:05.083841 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 05:27:05.084178 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Oct 13 05:27:05.084310 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Oct 13 05:27:05.090408 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:27:05.090486 systemd-tmpfiles[1390]: Skipping /boot
Oct 13 05:27:05.158034 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:27:05.158058 systemd-tmpfiles[1390]: Skipping /boot
Oct 13 05:27:05.259685 zram_generator::config[1425]: No configuration found.
Oct 13 05:27:05.339008 systemd-resolved[1324]: Positive Trust Anchors:
Oct 13 05:27:05.339026 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:27:05.339031 systemd-resolved[1324]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 05:27:05.339062 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:27:05.355016 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Oct 13 05:27:05.472802 systemd[1]: Reloading finished in 394 ms.
Oct 13 05:27:05.493838 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 05:27:05.495932 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:27:05.498110 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 05:27:05.526430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:27:05.536908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:27:05.540868 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:27:05.543533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 05:27:05.566217 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 05:27:05.569780 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 05:27:05.573460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:27:05.578000 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 05:27:05.582804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.582975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:27:05.584566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:27:05.594845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:27:05.600135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:27:05.602322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:27:05.602499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:27:05.602613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.604513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:27:05.604820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:27:05.620395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.620708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:27:05.623898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:27:05.625797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:27:05.625907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:27:05.626034 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.627938 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 05:27:05.630893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:27:05.631223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:27:05.635221 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 05:27:05.635803 systemd-udevd[1470]: Using default interface naming scheme 'v257'.
Oct 13 05:27:05.638204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:27:05.638471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:27:05.641493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:27:05.641731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:27:05.653739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.654004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:27:05.658930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:27:05.661828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:27:05.664836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:27:05.669948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:27:05.672659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:27:05.672781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:27:05.672917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:27:05.691205 augenrules[1505]: No rules
Oct 13 05:27:05.697246 systemd[1]: Finished ensure-sysext.service.
Oct 13 05:27:05.699056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:27:05.701645 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:27:05.701931 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:27:05.704001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:27:05.704216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:27:05.710921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:27:05.711172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:27:05.715077 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:27:05.715413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:27:05.718301 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:27:05.718516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:27:05.731440 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:27:05.733692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:27:05.733763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:27:05.737300 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 13 05:27:05.756803 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 05:27:05.759905 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 05:27:05.800706 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 13 05:27:05.859616 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 13 05:27:05.863199 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 05:27:05.887381 systemd-networkd[1530]: lo: Link UP
Oct 13 05:27:05.887392 systemd-networkd[1530]: lo: Gained carrier
Oct 13 05:27:05.891411 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:27:05.891419 systemd-networkd[1530]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:27:05.891436 systemd-networkd[1530]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:27:05.893716 systemd[1]: Reached target network.target - Network.
Oct 13 05:27:05.897992 systemd-networkd[1530]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:27:05.898066 systemd-networkd[1530]: eth0: Link UP
Oct 13 05:27:05.898469 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 05:27:05.899091 systemd-networkd[1530]: eth0: Gained carrier
Oct 13 05:27:05.899114 systemd-networkd[1530]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:27:05.902008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 05:27:05.913731 systemd-networkd[1530]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 05:27:05.916302 systemd-timesyncd[1534]: Network configuration changed, trying to establish connection.
Oct 13 05:27:05.917349 systemd-timesyncd[1534]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 13 05:27:05.918779 systemd-timesyncd[1534]: Initial clock synchronization to Mon 2025-10-13 05:27:05.911237 UTC.
Oct 13 05:27:05.926900 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 05:27:05.938689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 13 05:27:05.941570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:27:05.947940 kernel: ACPI: button: Power Button [PWRF]
Oct 13 05:27:05.944245 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 05:27:05.950915 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 05:27:05.972371 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 13 05:27:05.986435 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 05:27:05.993919 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 13 05:27:06.179930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:27:06.336604 kernel: kvm_amd: TSC scaling supported
Oct 13 05:27:06.336719 kernel: kvm_amd: Nested Virtualization enabled
Oct 13 05:27:06.336759 kernel: kvm_amd: Nested Paging enabled
Oct 13 05:27:06.337726 kernel: kvm_amd: LBR virtualization supported
Oct 13 05:27:06.339607 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 13 05:27:06.339661 kernel: kvm_amd: Virtual GIF supported
Oct 13 05:27:06.381685 kernel: EDAC MC: Ver: 3.0.0
Oct 13 05:27:06.402468 ldconfig[1467]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 05:27:06.410161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 05:27:06.469593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:27:06.475614 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 05:27:06.629941 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 05:27:06.632063 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:27:06.633936 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 05:27:06.636058 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 05:27:06.638522 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 13 05:27:06.640575 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 05:27:06.642455 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 05:27:06.644589 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 05:27:06.646641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 05:27:06.646734 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:27:06.648222 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:27:06.651345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 05:27:06.654783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 05:27:06.658827 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 05:27:06.660964 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 05:27:06.662916 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 05:27:06.667308 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 13 05:27:06.669194 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 13 05:27:06.671598 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 13 05:27:06.674030 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:27:06.675511 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:27:06.677074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:27:06.677104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:27:06.678321 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 13 05:27:06.681555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 13 05:27:06.684295 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 13 05:27:06.692145 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 13 05:27:06.695212 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 13 05:27:06.696790 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 05:27:06.698473 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 13 05:27:06.702614 jq[1590]: false
Oct 13 05:27:06.703117 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 13 05:27:06.707066 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 13 05:27:06.710890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 13 05:27:06.714603 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Oct 13 05:27:06.714162 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 13 05:27:06.713913 oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Oct 13 05:27:06.716891 extend-filesystems[1591]: Found /dev/vda6
Oct 13 05:27:06.721319 extend-filesystems[1591]: Found /dev/vda9
Oct 13 05:27:06.725026 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 13 05:27:06.725552 extend-filesystems[1591]: Checking size of /dev/vda9
Oct 13 05:27:06.728510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 13 05:27:06.729245 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 13 05:27:06.730306 systemd[1]: Starting update-engine.service - Update Engine...
Oct 13 05:27:06.733615 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 13 05:27:06.795705 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 13 05:27:06.798193 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 13 05:27:06.798482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 13 05:27:06.798905 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting
Oct 13 05:27:06.798897 oslogin_cache_refresh[1592]: Failure getting users, quitting
Oct 13 05:27:06.798994 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:27:06.798994 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache
Oct 13 05:27:06.798928 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:27:06.798995 oslogin_cache_refresh[1592]: Refreshing group entry cache
Oct 13 05:27:06.799787 systemd[1]: motdgen.service: Deactivated successfully.
Oct 13 05:27:06.800102 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 13 05:27:06.805962 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting
Oct 13 05:27:06.805962 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:27:06.805935 oslogin_cache_refresh[1592]: Failure getting groups, quitting
Oct 13 05:27:06.805947 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:27:06.806225 extend-filesystems[1591]: Resized partition /dev/vda9
Oct 13 05:27:06.810563 jq[1612]: true
Oct 13 05:27:06.813018 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 13 05:27:06.813331 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 13 05:27:06.815467 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 13 05:27:06.816100 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 13 05:27:06.821343 extend-filesystems[1622]: resize2fs 1.47.3 (8-Jul-2025)
Oct 13 05:27:06.828686 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 13 05:27:06.837818 jq[1627]: true
Oct 13 05:27:06.840543 update_engine[1611]: I20251013 05:27:06.840458 1611 main.cc:92] Flatcar Update Engine starting
Oct 13 05:27:06.855098 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 13 05:27:06.861591 (ntainerd)[1634]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 13 05:27:06.886511 tar[1620]: linux-amd64/LICENSE
Oct 13 05:27:06.887119 tar[1620]: linux-amd64/helm
Oct 13 05:27:06.887906 extend-filesystems[1622]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 13 05:27:06.887906 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 13 05:27:06.887906 extend-filesystems[1622]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 13 05:27:06.898134 extend-filesystems[1591]: Resized filesystem in /dev/vda9
Oct 13 05:27:06.889817 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 13 05:27:06.890103 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 13 05:27:06.910101 bash[1655]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 05:27:06.912744 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 13 05:27:06.912774 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 13 05:27:06.914746 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 13 05:27:06.916528 systemd-logind[1604]: New seat seat0.
Oct 13 05:27:06.918050 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 13 05:27:06.919126 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 13 05:27:06.932539 dbus-daemon[1588]: [system] SELinux support is enabled
Oct 13 05:27:06.932900 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 13 05:27:06.938122 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 13 05:27:06.938161 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 13 05:27:06.939723 update_engine[1611]: I20251013 05:27:06.939483 1611 update_check_scheduler.cc:74] Next update check in 11m53s
Oct 13 05:27:07.102560 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 13 05:27:07.168253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 13 05:27:07.168410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 13 05:27:07.172465 systemd[1]: Started update-engine.service - Update Engine.
Oct 13 05:27:07.174827 dbus-daemon[1588]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 13 05:27:07.176782 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 13 05:27:07.180311 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 13 05:27:07.187935 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 13 05:27:07.204048 systemd[1]: issuegen.service: Deactivated successfully.
Oct 13 05:27:07.204497 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 13 05:27:07.224112 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 13 05:27:07.257961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 13 05:27:07.265175 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 13 05:27:07.266436 systemd-networkd[1530]: eth0: Gained IPv6LL
Oct 13 05:27:07.278118 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 13 05:27:07.280293 systemd[1]: Reached target getty.target - Login Prompts.
Oct 13 05:27:07.282742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 13 05:27:07.286764 systemd[1]: Reached target network-online.target - Network is Online.
Oct 13 05:27:07.292896 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 13 05:27:07.300042 locksmithd[1672]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 13 05:27:07.302045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:27:07.440365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 13 05:27:07.478277 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 13 05:27:07.478567 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 13 05:27:07.481316 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 13 05:27:07.484347 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 13 05:27:07.651061 containerd[1634]: time="2025-10-13T05:27:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 13 05:27:07.652054 containerd[1634]: time="2025-10-13T05:27:07.651970569Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.742830033Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.731µs"
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.742884074Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.742905581Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.743362644Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.743410875Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.743479752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.743707928Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.743724667Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.744127108Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.744174078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.744211060Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744236 containerd[1634]: time="2025-10-13T05:27:07.744237965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744629 containerd[1634]: time="2025-10-13T05:27:07.744394180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744825 containerd[1634]: time="2025-10-13T05:27:07.744791462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744856 containerd[1634]: time="2025-10-13T05:27:07.744835146Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:27:07.744856 containerd[1634]: time="2025-10-13T05:27:07.744844512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 05:27:07.745093 containerd[1634]: time="2025-10-13T05:27:07.745071206Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 05:27:07.745531 containerd[1634]: time="2025-10-13T05:27:07.745487781Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 05:27:07.745739 containerd[1634]: time="2025-10-13T05:27:07.745720384Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 05:27:07.753191 containerd[1634]: time="2025-10-13T05:27:07.753120609Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753301826Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753403018Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753518082Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753536133Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753568567Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 05:27:07.753680 containerd[1634]: time="2025-10-13T05:27:07.753609217Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 05:27:07.754711 containerd[1634]: time="2025-10-13T05:27:07.754360118Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:27:07.754711 containerd[1634]: time="2025-10-13T05:27:07.754411244Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:27:07.754711 containerd[1634]: time="2025-10-13T05:27:07.754423755Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:27:07.754711 containerd[1634]: time="2025-10-13T05:27:07.754432590Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:27:07.754711 containerd[1634]: time="2025-10-13T05:27:07.754445572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:27:07.754993 containerd[1634]: time="2025-10-13T05:27:07.754974868Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:27:07.755166 containerd[1634]: time="2025-10-13T05:27:07.755147310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:27:07.755226 containerd[1634]: time="2025-10-13T05:27:07.755214443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:27:07.755300 containerd[1634]: time="2025-10-13T05:27:07.755285023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:27:07.755412 containerd[1634]: time="2025-10-13T05:27:07.755375737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755454059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755471368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755487685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755498445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755507860Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755517506Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755664134Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755711585Z" level=info msg="Start snapshots syncer"
Oct 13 05:27:07.755886 containerd[1634]: time="2025-10-13T05:27:07.755775824Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:27:07.756864 containerd[1634]: time="2025-10-13T05:27:07.756697253Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 05:27:07.757180 containerd[1634]: time="2025-10-13T05:27:07.757118225Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 05:27:07.759619 containerd[1634]: time="2025-10-13T05:27:07.759586643Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:27:07.760026 containerd[1634]: time="2025-10-13T05:27:07.759992430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:27:07.760113 containerd[1634]: time="2025-10-13T05:27:07.760097949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:27:07.760238 containerd[1634]: time="2025-10-13T05:27:07.760172806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:27:07.760238 containerd[1634]: time="2025-10-13T05:27:07.760193671Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:27:07.760238 containerd[1634]: time="2025-10-13T05:27:07.760215688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:27:07.760398 containerd[1634]: time="2025-10-13T05:27:07.760329841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:27:07.760398 containerd[1634]: time="2025-10-13T05:27:07.760351748Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:27:07.760462 containerd[1634]: time="2025-10-13T05:27:07.760380868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:27:07.760620 containerd[1634]: time="2025-10-13T05:27:07.760543032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:27:07.760620 containerd[1634]: time="2025-10-13T05:27:07.760562826Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:27:07.760821 containerd[1634]: time="2025-10-13T05:27:07.760596082Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:27:07.760821 containerd[1634]: time="2025-10-13T05:27:07.760772540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:27:07.760821 containerd[1634]: time="2025-10-13T05:27:07.760783769Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:27:07.760821 containerd[1634]: time="2025-10-13T05:27:07.760795359Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:27:07.760945 containerd[1634]: time="2025-10-13T05:27:07.760804355Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:27:07.761087 containerd[1634]: time="2025-10-13T05:27:07.761019398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:27:07.761087 containerd[1634]: time="2025-10-13T05:27:07.761042227Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:27:07.761178 containerd[1634]: time="2025-10-13T05:27:07.761165176Z" level=info msg="runtime interface created" Oct 13 05:27:07.761235 containerd[1634]: time="2025-10-13T05:27:07.761213317Z" level=info msg="created NRI interface" Oct 13 05:27:07.761352 containerd[1634]: time="2025-10-13T05:27:07.761300264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:27:07.761352 containerd[1634]: time="2025-10-13T05:27:07.761326889Z" level=info msg="Connect containerd service" Oct 13 05:27:07.761352 containerd[1634]: time="2025-10-13T05:27:07.761375551Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:27:07.762438 
containerd[1634]: time="2025-10-13T05:27:07.762412968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062508911Z" level=info msg="Start subscribing containerd event" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062630246Z" level=info msg="Start recovering state" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062841506Z" level=info msg="Start event monitor" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062861719Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062877056Z" level=info msg="Start streaming server" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062895887Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062905034Z" level=info msg="runtime interface starting up..." Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062922213Z" level=info msg="starting plugins..." Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.062945583Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.063055629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:27:08.063194 containerd[1634]: time="2025-10-13T05:27:08.063152554Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:27:08.063873 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 13 05:27:08.065717 containerd[1634]: time="2025-10-13T05:27:08.065686787Z" level=info msg="containerd successfully booted in 0.415416s"
Oct 13 05:27:08.093529 tar[1620]: linux-amd64/README.md
Oct 13 05:27:08.118376 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 05:27:08.438398 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 13 05:27:08.441822 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:58640.service - OpenSSH per-connection server daemon (10.0.0.1:58640).
Oct 13 05:27:08.543272 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 58640 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:08.546285 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:08.556286 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 13 05:27:08.560134 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 13 05:27:08.570774 systemd-logind[1604]: New session 1 of user core.
Oct 13 05:27:08.589612 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 13 05:27:08.595271 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 13 05:27:08.611156 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 13 05:27:08.614058 systemd-logind[1604]: New session c1 of user core.
Oct 13 05:27:08.770676 systemd[1728]: Queued start job for default target default.target.
Oct 13 05:27:08.781877 systemd[1728]: Created slice app.slice - User Application Slice.
Oct 13 05:27:08.781904 systemd[1728]: Reached target paths.target - Paths.
Oct 13 05:27:08.781946 systemd[1728]: Reached target timers.target - Timers.
Oct 13 05:27:08.783477 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 13 05:27:08.796918 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 13 05:27:08.797050 systemd[1728]: Reached target sockets.target - Sockets.
Oct 13 05:27:08.797098 systemd[1728]: Reached target basic.target - Basic System.
Oct 13 05:27:08.797144 systemd[1728]: Reached target default.target - Main User Target.
Oct 13 05:27:08.797175 systemd[1728]: Startup finished in 175ms.
Oct 13 05:27:08.797566 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 13 05:27:08.813806 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 13 05:27:08.882032 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656).
Oct 13 05:27:08.938081 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:08.939938 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:08.944748 systemd-logind[1604]: New session 2 of user core.
Oct 13 05:27:08.954837 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 13 05:27:09.004699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:27:09.007105 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 13 05:27:09.008942 systemd[1]: Startup finished in 2.699s (kernel) + 6.077s (initrd) + 6.037s (userspace) = 14.814s.
Oct 13 05:27:09.014584 sshd[1744]: Connection closed by 10.0.0.1 port 58656
Oct 13 05:27:09.015087 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:27:09.015433 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:09.020985 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:58656.service: Deactivated successfully.
Oct 13 05:27:09.026896 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit.
Oct 13 05:27:09.027017 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:58658.service - OpenSSH per-connection server daemon (10.0.0.1:58658).
Oct 13 05:27:09.048510 systemd[1]: session-2.scope: Deactivated successfully.
Oct 13 05:27:09.059898 systemd-logind[1604]: Removed session 2.
Oct 13 05:27:09.599025 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 58658 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:09.600992 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:09.605503 systemd-logind[1604]: New session 3 of user core.
Oct 13 05:27:09.611766 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 13 05:27:09.670322 sshd[1768]: Connection closed by 10.0.0.1 port 58658
Oct 13 05:27:09.670977 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:09.684446 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:58658.service: Deactivated successfully.
Oct 13 05:27:09.686813 systemd[1]: session-3.scope: Deactivated successfully.
Oct 13 05:27:09.687517 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit.
Oct 13 05:27:09.690487 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:58672.service - OpenSSH per-connection server daemon (10.0.0.1:58672).
Oct 13 05:27:09.691287 systemd-logind[1604]: Removed session 3.
Oct 13 05:27:09.741940 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 58672 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:09.743471 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:09.748057 systemd-logind[1604]: New session 4 of user core.
Oct 13 05:27:09.758811 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 13 05:27:09.804532 kubelet[1749]: E1013 05:27:09.804433 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:27:09.808887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:27:09.809079 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:27:09.809470 systemd[1]: kubelet.service: Consumed 2.141s CPU time, 267.7M memory peak.
Oct 13 05:27:09.835795 sshd[1777]: Connection closed by 10.0.0.1 port 58672
Oct 13 05:27:09.836237 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:09.848815 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:58672.service: Deactivated successfully.
Oct 13 05:27:09.851442 systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 05:27:09.852514 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Oct 13 05:27:09.856739 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:58682.service - OpenSSH per-connection server daemon (10.0.0.1:58682).
Oct 13 05:27:09.857543 systemd-logind[1604]: Removed session 4.
Oct 13 05:27:09.918306 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 58682 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:09.919977 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:09.926378 systemd-logind[1604]: New session 5 of user core.
Oct 13 05:27:09.938794 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 13 05:27:10.002829 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 13 05:27:10.003246 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:10.021795 sudo[1788]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:10.024689 sshd[1787]: Connection closed by 10.0.0.1 port 58682
Oct 13 05:27:10.025314 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:10.044852 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:58682.service: Deactivated successfully.
Oct 13 05:27:10.046961 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 05:27:10.047759 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Oct 13 05:27:10.050913 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:58698.service - OpenSSH per-connection server daemon (10.0.0.1:58698).
Oct 13 05:27:10.051518 systemd-logind[1604]: Removed session 5.
Oct 13 05:27:10.119635 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 58698 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:10.120971 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:10.124986 systemd-logind[1604]: New session 6 of user core.
Oct 13 05:27:10.136768 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 13 05:27:10.190672 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 05:27:10.190990 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:10.198300 sudo[1799]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:10.205750 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 05:27:10.206040 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:10.216856 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:27:10.270877 augenrules[1821]: No rules
Oct 13 05:27:10.272812 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:27:10.273112 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:27:10.274225 sudo[1798]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:10.275870 sshd[1797]: Connection closed by 10.0.0.1 port 58698
Oct 13 05:27:10.276189 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:10.285035 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:58698.service: Deactivated successfully.
Oct 13 05:27:10.286749 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 05:27:10.287463 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Oct 13 05:27:10.289893 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:58714.service - OpenSSH per-connection server daemon (10.0.0.1:58714).
Oct 13 05:27:10.290413 systemd-logind[1604]: Removed session 6.
Oct 13 05:27:10.343630 sshd[1830]: Accepted publickey for core from 10.0.0.1 port 58714 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:27:10.344930 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:10.349123 systemd-logind[1604]: New session 7 of user core.
Oct 13 05:27:10.362773 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 13 05:27:10.418495 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 05:27:10.418895 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:11.140157 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 13 05:27:11.160996 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 13 05:27:11.671945 dockerd[1854]: time="2025-10-13T05:27:11.671835595Z" level=info msg="Starting up"
Oct 13 05:27:11.672775 dockerd[1854]: time="2025-10-13T05:27:11.672714011Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 13 05:27:11.689960 dockerd[1854]: time="2025-10-13T05:27:11.689907355Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 13 05:27:12.233659 dockerd[1854]: time="2025-10-13T05:27:12.233560982Z" level=info msg="Loading containers: start."
Oct 13 05:27:12.246130 kernel: Initializing XFRM netlink socket
Oct 13 05:27:12.550505 systemd-networkd[1530]: docker0: Link UP
Oct 13 05:27:12.555192 dockerd[1854]: time="2025-10-13T05:27:12.555139789Z" level=info msg="Loading containers: done."
Oct 13 05:27:12.570230 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3269778635-merged.mount: Deactivated successfully.
Oct 13 05:27:12.571401 dockerd[1854]: time="2025-10-13T05:27:12.571356450Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 05:27:12.571481 dockerd[1854]: time="2025-10-13T05:27:12.571462212Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 05:27:12.571593 dockerd[1854]: time="2025-10-13T05:27:12.571575436Z" level=info msg="Initializing buildkit"
Oct 13 05:27:12.601400 dockerd[1854]: time="2025-10-13T05:27:12.601357061Z" level=info msg="Completed buildkit initialization"
Oct 13 05:27:12.608233 dockerd[1854]: time="2025-10-13T05:27:12.608197461Z" level=info msg="Daemon has completed initialization"
Oct 13 05:27:12.608403 dockerd[1854]: time="2025-10-13T05:27:12.608321144Z" level=info msg="API listen on /run/docker.sock"
Oct 13 05:27:12.608541 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 05:27:13.594902 containerd[1634]: time="2025-10-13T05:27:13.594832843Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 13 05:27:14.192569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505938964.mount: Deactivated successfully.
Oct 13 05:27:15.410869 containerd[1634]: time="2025-10-13T05:27:15.410802094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:15.411809 containerd[1634]: time="2025-10-13T05:27:15.411690984Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 13 05:27:15.413051 containerd[1634]: time="2025-10-13T05:27:15.413007206Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:15.416200 containerd[1634]: time="2025-10-13T05:27:15.416165622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:15.419328 containerd[1634]: time="2025-10-13T05:27:15.418699414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.82379639s"
Oct 13 05:27:15.419328 containerd[1634]: time="2025-10-13T05:27:15.418750732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 13 05:27:15.420199 containerd[1634]: time="2025-10-13T05:27:15.420141944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 13 05:27:16.847638 containerd[1634]: time="2025-10-13T05:27:16.847561245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:16.848454 containerd[1634]: time="2025-10-13T05:27:16.848378305Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 13 05:27:16.849962 containerd[1634]: time="2025-10-13T05:27:16.849836388Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:16.853128 containerd[1634]: time="2025-10-13T05:27:16.853074526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:16.855005 containerd[1634]: time="2025-10-13T05:27:16.854959753Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.434772329s"
Oct 13 05:27:16.855005 containerd[1634]: time="2025-10-13T05:27:16.855005982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 13 05:27:16.855628 containerd[1634]: time="2025-10-13T05:27:16.855577406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 13 05:27:19.027103 containerd[1634]: time="2025-10-13T05:27:19.027025172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:19.085505 containerd[1634]: time="2025-10-13T05:27:19.085430861Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 13 05:27:19.134992 containerd[1634]: time="2025-10-13T05:27:19.134941513Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:19.196295 containerd[1634]: time="2025-10-13T05:27:19.196217047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:19.197242 containerd[1634]: time="2025-10-13T05:27:19.197190201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.341578486s"
Oct 13 05:27:19.197242 containerd[1634]: time="2025-10-13T05:27:19.197229840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 13 05:27:19.197843 containerd[1634]: time="2025-10-13T05:27:19.197814833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 13 05:27:20.059863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:27:20.062178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:27:20.354577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:27:20.358562 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:27:20.414782 kubelet[2148]: E1013 05:27:20.414686 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:27:20.422325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:27:20.422587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:27:20.423114 systemd[1]: kubelet.service: Consumed 316ms CPU time, 110.4M memory peak.
Oct 13 05:27:21.591616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321742085.mount: Deactivated successfully.
Oct 13 05:27:22.412125 containerd[1634]: time="2025-10-13T05:27:22.412041053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:22.412824 containerd[1634]: time="2025-10-13T05:27:22.412799730Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 13 05:27:22.414161 containerd[1634]: time="2025-10-13T05:27:22.414089872Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:22.416084 containerd[1634]: time="2025-10-13T05:27:22.416049435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:22.416851 containerd[1634]: time="2025-10-13T05:27:22.416797813Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.218946175s"
Oct 13 05:27:22.416914 containerd[1634]: time="2025-10-13T05:27:22.416856797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 13 05:27:22.417625 containerd[1634]: time="2025-10-13T05:27:22.417571927Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 13 05:27:22.909476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396185288.mount: Deactivated successfully.
Oct 13 05:27:25.684309 containerd[1634]: time="2025-10-13T05:27:25.684229383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:25.685017 containerd[1634]: time="2025-10-13T05:27:25.684970636Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 13 05:27:25.686527 containerd[1634]: time="2025-10-13T05:27:25.686489317Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:25.689233 containerd[1634]: time="2025-10-13T05:27:25.689184313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:27:25.690206 containerd[1634]: time="2025-10-13T05:27:25.690116374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.272466351s"
Oct 13 05:27:25.690206 containerd[1634]: time="2025-10-13T05:27:25.690187491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 13 05:27:25.690984 containerd[1634]: time="2025-10-13T05:27:25.690814021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 13 05:27:26.307716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506974844.mount: Deactivated successfully.
Oct 13 05:27:26.313916 containerd[1634]: time="2025-10-13T05:27:26.313869593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:27:26.314570 containerd[1634]: time="2025-10-13T05:27:26.314531188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 13 05:27:26.315784 containerd[1634]: time="2025-10-13T05:27:26.315747127Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:27:26.317984 containerd[1634]: time="2025-10-13T05:27:26.317949927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:27:26.318778 containerd[1634]: time="2025-10-13T05:27:26.318733398Z" level=info msg="Pulled
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 627.890236ms" Oct 13 05:27:26.318778 containerd[1634]: time="2025-10-13T05:27:26.318770864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:27:26.319295 containerd[1634]: time="2025-10-13T05:27:26.319267475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 05:27:26.741231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647416960.mount: Deactivated successfully. Oct 13 05:27:28.554191 containerd[1634]: time="2025-10-13T05:27:28.554105336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:28.554870 containerd[1634]: time="2025-10-13T05:27:28.554811504Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 13 05:27:28.556031 containerd[1634]: time="2025-10-13T05:27:28.555980887Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:28.560253 containerd[1634]: time="2025-10-13T05:27:28.559821938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:28.562356 containerd[1634]: time="2025-10-13T05:27:28.562310211Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.24300617s" Oct 13 05:27:28.562356 containerd[1634]: time="2025-10-13T05:27:28.562347628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 05:27:30.673065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:27:30.674908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:27:30.889524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:27:30.900044 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:27:31.032906 kubelet[2308]: E1013 05:27:31.032738 2308 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:27:31.037183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:27:31.037385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:27:31.037817 systemd[1]: kubelet.service: Consumed 306ms CPU time, 109.4M memory peak. Oct 13 05:27:31.732755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:27:31.732920 systemd[1]: kubelet.service: Consumed 306ms CPU time, 109.4M memory peak. Oct 13 05:27:31.735135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:27:31.762434 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-7.scope)... 
Oct 13 05:27:31.762450 systemd[1]: Reloading... Oct 13 05:27:31.853698 zram_generator::config[2372]: No configuration found. Oct 13 05:27:32.460726 systemd[1]: Reloading finished in 697 ms. Oct 13 05:27:32.543455 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:27:32.543583 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:27:32.543966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:27:32.544080 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak. Oct 13 05:27:32.546250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:27:32.727381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:27:32.731594 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:27:32.778934 kubelet[2414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:27:32.778934 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:27:32.778934 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:27:32.779351 kubelet[2414]: I1013 05:27:32.779011 2414 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:27:34.041552 kubelet[2414]: I1013 05:27:34.041486 2414 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:27:34.041552 kubelet[2414]: I1013 05:27:34.041526 2414 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:27:34.042193 kubelet[2414]: I1013 05:27:34.041890 2414 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:27:34.067888 kubelet[2414]: I1013 05:27:34.067825 2414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:27:34.069529 kubelet[2414]: E1013 05:27:34.069484 2414 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:27:34.075608 kubelet[2414]: I1013 05:27:34.075579 2414 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:27:34.082390 kubelet[2414]: I1013 05:27:34.082324 2414 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:27:34.082841 kubelet[2414]: I1013 05:27:34.082788 2414 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:27:34.083057 kubelet[2414]: I1013 05:27:34.082843 2414 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:27:34.083169 kubelet[2414]: I1013 05:27:34.083077 2414 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:27:34.083169 
kubelet[2414]: I1013 05:27:34.083093 2414 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:27:34.083305 kubelet[2414]: I1013 05:27:34.083288 2414 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:27:34.086881 kubelet[2414]: I1013 05:27:34.086823 2414 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:27:34.086881 kubelet[2414]: I1013 05:27:34.086873 2414 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:27:34.087076 kubelet[2414]: I1013 05:27:34.086916 2414 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:27:34.089236 kubelet[2414]: I1013 05:27:34.089052 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:27:34.092259 kubelet[2414]: E1013 05:27:34.092192 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:27:34.092259 kubelet[2414]: E1013 05:27:34.092191 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:27:34.094624 kubelet[2414]: I1013 05:27:34.094577 2414 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:27:34.095115 kubelet[2414]: I1013 05:27:34.095078 2414 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:27:34.096180 kubelet[2414]: W1013 
05:27:34.096151 2414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:27:34.098949 kubelet[2414]: I1013 05:27:34.098920 2414 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:27:34.098999 kubelet[2414]: I1013 05:27:34.098975 2414 server.go:1289] "Started kubelet" Oct 13 05:27:34.101299 kubelet[2414]: I1013 05:27:34.100451 2414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:27:34.102479 kubelet[2414]: I1013 05:27:34.102449 2414 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:27:34.102604 kubelet[2414]: I1013 05:27:34.102455 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:27:34.105216 kubelet[2414]: I1013 05:27:34.105170 2414 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:27:34.106066 kubelet[2414]: E1013 05:27:34.104708 2414 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df5c498dffd29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:27:34.098943273 +0000 UTC m=+1.360165844,LastTimestamp:2025-10-13 05:27:34.098943273 +0000 UTC m=+1.360165844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:27:34.106186 kubelet[2414]: I1013 05:27:34.102475 2414 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:27:34.106801 kubelet[2414]: I1013 05:27:34.106574 2414 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:27:34.106801 kubelet[2414]: I1013 05:27:34.106742 2414 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:27:34.106908 kubelet[2414]: I1013 05:27:34.106844 2414 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:27:34.106908 kubelet[2414]: E1013 05:27:34.106858 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:27:34.106987 kubelet[2414]: E1013 05:27:34.106955 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Oct 13 05:27:34.107211 kubelet[2414]: I1013 05:27:34.107185 2414 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:27:34.107794 kubelet[2414]: E1013 05:27:34.107761 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:27:34.107977 kubelet[2414]: I1013 05:27:34.107943 2414 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:27:34.108337 kubelet[2414]: I1013 05:27:34.108071 2414 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:27:34.109038 kubelet[2414]: E1013 05:27:34.109007 2414 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:27:34.109365 kubelet[2414]: I1013 05:27:34.109340 2414 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:27:34.126697 kubelet[2414]: I1013 05:27:34.126209 2414 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:27:34.126697 kubelet[2414]: I1013 05:27:34.126233 2414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:27:34.126697 kubelet[2414]: I1013 05:27:34.126256 2414 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:27:34.128792 kubelet[2414]: I1013 05:27:34.128733 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:27:34.130241 kubelet[2414]: I1013 05:27:34.130196 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:27:34.130241 kubelet[2414]: I1013 05:27:34.130239 2414 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:27:34.130310 kubelet[2414]: I1013 05:27:34.130267 2414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:27:34.130310 kubelet[2414]: I1013 05:27:34.130279 2414 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:27:34.130355 kubelet[2414]: E1013 05:27:34.130340 2414 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:27:34.207785 kubelet[2414]: E1013 05:27:34.207722 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:27:34.230965 kubelet[2414]: E1013 05:27:34.230919 2414 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:27:34.307933 kubelet[2414]: E1013 05:27:34.307830 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:27:34.307933 kubelet[2414]: E1013 05:27:34.307910 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Oct 13 05:27:34.322706 kubelet[2414]: E1013 05:27:34.322668 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:27:34.328636 kubelet[2414]: I1013 05:27:34.328605 2414 policy_none.go:49] "None policy: Start" Oct 13 05:27:34.328709 kubelet[2414]: I1013 05:27:34.328643 2414 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:27:34.328709 kubelet[2414]: I1013 05:27:34.328693 2414 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:27:34.360127 systemd[1]: Created slice kubepods.slice 
- libcontainer container kubepods.slice. Oct 13 05:27:34.373032 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:27:34.376712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:27:34.388731 kubelet[2414]: E1013 05:27:34.388697 2414 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:27:34.388981 kubelet[2414]: I1013 05:27:34.388951 2414 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:27:34.389064 kubelet[2414]: I1013 05:27:34.388972 2414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:27:34.389668 kubelet[2414]: I1013 05:27:34.389189 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:27:34.390371 kubelet[2414]: E1013 05:27:34.390338 2414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:27:34.390461 kubelet[2414]: E1013 05:27:34.390407 2414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:27:34.442608 systemd[1]: Created slice kubepods-burstable-pod84e6e337e3264aa3e3e8e4ff7cc2b228.slice - libcontainer container kubepods-burstable-pod84e6e337e3264aa3e3e8e4ff7cc2b228.slice. Oct 13 05:27:34.458622 kubelet[2414]: E1013 05:27:34.458577 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:34.461704 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Oct 13 05:27:34.480871 kubelet[2414]: E1013 05:27:34.480845 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:34.484225 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 13 05:27:34.486234 kubelet[2414]: E1013 05:27:34.486192 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:34.490099 kubelet[2414]: I1013 05:27:34.490076 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:27:34.490495 kubelet[2414]: E1013 05:27:34.490456 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 13 05:27:34.508803 kubelet[2414]: I1013 05:27:34.508772 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:34.508882 kubelet[2414]: I1013 05:27:34.508809 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:34.508882 kubelet[2414]: I1013 05:27:34.508830 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:34.508882 kubelet[2414]: I1013 05:27:34.508855 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:34.508882 kubelet[2414]: I1013 05:27:34.508878 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:34.508987 kubelet[2414]: I1013 05:27:34.508922 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:34.508987 kubelet[2414]: I1013 05:27:34.508958 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:34.508987 kubelet[2414]: I1013 05:27:34.508981 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:34.509063 kubelet[2414]: I1013 05:27:34.509006 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:27:34.692035 kubelet[2414]: I1013 05:27:34.691881 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:27:34.692179 kubelet[2414]: E1013 05:27:34.692101 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 13 05:27:34.708478 kubelet[2414]: E1013 05:27:34.708432 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Oct 13 05:27:34.759990 kubelet[2414]: E1013 05:27:34.759935 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.760673 containerd[1634]: time="2025-10-13T05:27:34.760608479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84e6e337e3264aa3e3e8e4ff7cc2b228,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:34.781831 kubelet[2414]: E1013 05:27:34.781794 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.782686 containerd[1634]: time="2025-10-13T05:27:34.782111993Z" level=info msg="connecting to shim 3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e" address="unix:///run/containerd/s/acb884f779400a7bfeea0d0b16229262908a0a10fdff57e987ce97e97bf9256d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:34.782686 containerd[1634]: time="2025-10-13T05:27:34.782271389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:34.787727 kubelet[2414]: E1013 05:27:34.787692 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.790045 containerd[1634]: time="2025-10-13T05:27:34.790012355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:34.808828 systemd[1]: Started cri-containerd-3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e.scope - libcontainer container 3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e. 
Oct 13 05:27:34.814721 containerd[1634]: time="2025-10-13T05:27:34.814686118Z" level=info msg="connecting to shim 5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc" address="unix:///run/containerd/s/ff7a57bf6e20e573aa936068136e0ce17fc3d74cec5c180b8e3b8a6e0e9dbedb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:34.828398 containerd[1634]: time="2025-10-13T05:27:34.828348467Z" level=info msg="connecting to shim 59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131" address="unix:///run/containerd/s/8b582fd8e2634f4e4066317fe9943183bd96d501beb4138e9cea7849c57d308c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:34.848823 systemd[1]: Started cri-containerd-5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc.scope - libcontainer container 5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc. Oct 13 05:27:34.854432 systemd[1]: Started cri-containerd-59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131.scope - libcontainer container 59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131. 
Oct 13 05:27:34.859666 containerd[1634]: time="2025-10-13T05:27:34.859606015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84e6e337e3264aa3e3e8e4ff7cc2b228,Namespace:kube-system,Attempt:0,} returns sandbox id \"3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e\"" Oct 13 05:27:34.861016 kubelet[2414]: E1013 05:27:34.860990 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.866513 containerd[1634]: time="2025-10-13T05:27:34.866467850Z" level=info msg="CreateContainer within sandbox \"3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:27:34.876743 containerd[1634]: time="2025-10-13T05:27:34.875905345Z" level=info msg="Container dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:34.885716 containerd[1634]: time="2025-10-13T05:27:34.885692226Z" level=info msg="CreateContainer within sandbox \"3442753e5fb2a64200c048f4f9954fd0fa67c788c83e939caafeba4e60354b5e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec\"" Oct 13 05:27:34.886690 containerd[1634]: time="2025-10-13T05:27:34.886668392Z" level=info msg="StartContainer for \"dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec\"" Oct 13 05:27:34.888254 containerd[1634]: time="2025-10-13T05:27:34.888226693Z" level=info msg="connecting to shim dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec" address="unix:///run/containerd/s/acb884f779400a7bfeea0d0b16229262908a0a10fdff57e987ce97e97bf9256d" protocol=ttrpc version=3 Oct 13 05:27:34.903532 containerd[1634]: time="2025-10-13T05:27:34.903497272Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc\"" Oct 13 05:27:34.904327 kubelet[2414]: E1013 05:27:34.904304 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.909428 containerd[1634]: time="2025-10-13T05:27:34.909384827Z" level=info msg="CreateContainer within sandbox \"5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:27:34.912371 containerd[1634]: time="2025-10-13T05:27:34.912333597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131\"" Oct 13 05:27:34.912847 systemd[1]: Started cri-containerd-dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec.scope - libcontainer container dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec. 
Oct 13 05:27:34.912952 kubelet[2414]: E1013 05:27:34.912866 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:34.918414 containerd[1634]: time="2025-10-13T05:27:34.918379004Z" level=info msg="CreateContainer within sandbox \"59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:27:34.922968 containerd[1634]: time="2025-10-13T05:27:34.922915809Z" level=info msg="Container 7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:34.929101 containerd[1634]: time="2025-10-13T05:27:34.929081012Z" level=info msg="Container 5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:34.939981 containerd[1634]: time="2025-10-13T05:27:34.939881837Z" level=info msg="CreateContainer within sandbox \"5d3d0ff20c21e98cb157ddcc81a3457ddfbdb56bb06faace0534c5168970d1bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c\"" Oct 13 05:27:34.940685 containerd[1634]: time="2025-10-13T05:27:34.940665065Z" level=info msg="StartContainer for \"7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c\"" Oct 13 05:27:34.941246 containerd[1634]: time="2025-10-13T05:27:34.941200387Z" level=info msg="CreateContainer within sandbox \"59b9f0654299115221d936bf29c963cec4c6cb71d3fecf3ced61f9dc90835131\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119\"" Oct 13 05:27:34.941545 containerd[1634]: time="2025-10-13T05:27:34.941516505Z" level=info msg="StartContainer for \"5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119\"" Oct 13 
05:27:34.941945 containerd[1634]: time="2025-10-13T05:27:34.941920962Z" level=info msg="connecting to shim 7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c" address="unix:///run/containerd/s/ff7a57bf6e20e573aa936068136e0ce17fc3d74cec5c180b8e3b8a6e0e9dbedb" protocol=ttrpc version=3 Oct 13 05:27:34.943133 containerd[1634]: time="2025-10-13T05:27:34.942507888Z" level=info msg="connecting to shim 5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119" address="unix:///run/containerd/s/8b582fd8e2634f4e4066317fe9943183bd96d501beb4138e9cea7849c57d308c" protocol=ttrpc version=3 Oct 13 05:27:34.963900 systemd[1]: Started cri-containerd-7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c.scope - libcontainer container 7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c. Oct 13 05:27:34.970282 containerd[1634]: time="2025-10-13T05:27:34.970240059Z" level=info msg="StartContainer for \"dc00fe2f1bfab04870bdce1df0b91e28fcbb4d20ccc615ed2b595f941de675ec\" returns successfully" Oct 13 05:27:34.973911 systemd[1]: Started cri-containerd-5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119.scope - libcontainer container 5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119. 
Oct 13 05:27:35.015032 kubelet[2414]: E1013 05:27:35.014978 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:27:35.030310 containerd[1634]: time="2025-10-13T05:27:35.030259129Z" level=info msg="StartContainer for \"7c35b51d2f58712154f1bf3ec6fcb4f4718d76e02cda5ac85cc572f57b0a0f9c\" returns successfully" Oct 13 05:27:35.040974 containerd[1634]: time="2025-10-13T05:27:35.040862425Z" level=info msg="StartContainer for \"5027a4dff5268cb88c7b27d6662b08d3765b7fe4b70e437305a7ef4f7f2bd119\" returns successfully" Oct 13 05:27:35.096047 kubelet[2414]: I1013 05:27:35.095731 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:27:35.144195 kubelet[2414]: E1013 05:27:35.144103 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:35.144375 kubelet[2414]: E1013 05:27:35.144296 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:35.150463 kubelet[2414]: E1013 05:27:35.150432 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:35.150546 kubelet[2414]: E1013 05:27:35.150533 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:35.152975 kubelet[2414]: E1013 05:27:35.152953 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:35.153120 kubelet[2414]: E1013 05:27:35.153096 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:36.156190 kubelet[2414]: E1013 05:27:36.156151 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:36.156679 kubelet[2414]: E1013 05:27:36.156330 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:36.156679 kubelet[2414]: E1013 05:27:36.156567 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:27:36.156679 kubelet[2414]: E1013 05:27:36.156672 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:36.910871 kubelet[2414]: E1013 05:27:36.910805 2414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:27:37.000532 kubelet[2414]: I1013 05:27:37.000458 2414 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:27:37.000722 kubelet[2414]: E1013 05:27:37.000569 2414 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:27:37.007526 kubelet[2414]: I1013 05:27:37.007479 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:37.031498 kubelet[2414]: E1013 05:27:37.031398 2414 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:37.031498 kubelet[2414]: I1013 05:27:37.031483 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:37.033484 kubelet[2414]: E1013 05:27:37.033270 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:37.033484 kubelet[2414]: I1013 05:27:37.033299 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:27:37.035235 kubelet[2414]: E1013 05:27:37.035192 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:27:37.091764 kubelet[2414]: I1013 05:27:37.091717 2414 apiserver.go:52] "Watching apiserver" Oct 13 05:27:37.107559 kubelet[2414]: I1013 05:27:37.107496 2414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:27:39.183940 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-7.scope)... Oct 13 05:27:39.183958 systemd[1]: Reloading... Oct 13 05:27:39.270701 zram_generator::config[2751]: No configuration found. Oct 13 05:27:39.509824 systemd[1]: Reloading finished in 325 ms. Oct 13 05:27:39.535877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:27:39.549023 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:27:39.549349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:27:39.549402 systemd[1]: kubelet.service: Consumed 1.484s CPU time, 130.7M memory peak. Oct 13 05:27:39.551293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:27:39.819616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:27:39.831060 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:27:39.884823 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:27:39.885307 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:27:39.885360 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:27:39.885527 kubelet[2790]: I1013 05:27:39.885466 2790 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:27:39.892137 kubelet[2790]: I1013 05:27:39.892102 2790 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:27:39.892137 kubelet[2790]: I1013 05:27:39.892125 2790 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:27:39.892359 kubelet[2790]: I1013 05:27:39.892330 2790 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:27:39.893765 kubelet[2790]: I1013 05:27:39.893743 2790 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:27:39.896108 kubelet[2790]: I1013 05:27:39.896080 2790 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:27:39.904263 kubelet[2790]: I1013 05:27:39.904236 2790 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:27:39.909248 kubelet[2790]: I1013 05:27:39.909217 2790 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:27:39.909560 kubelet[2790]: I1013 05:27:39.909510 2790 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:27:39.909787 kubelet[2790]: I1013 05:27:39.909547 2790 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:27:39.909908 kubelet[2790]: I1013 05:27:39.909799 2790 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:27:39.909908 
kubelet[2790]: I1013 05:27:39.909812 2790 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:27:39.909908 kubelet[2790]: I1013 05:27:39.909874 2790 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:27:39.910066 kubelet[2790]: I1013 05:27:39.910047 2790 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:27:39.910066 kubelet[2790]: I1013 05:27:39.910067 2790 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:27:39.910118 kubelet[2790]: I1013 05:27:39.910092 2790 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:27:39.910118 kubelet[2790]: I1013 05:27:39.910110 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:27:39.911769 kubelet[2790]: I1013 05:27:39.911741 2790 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:27:39.912258 kubelet[2790]: I1013 05:27:39.912229 2790 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:27:39.920985 kubelet[2790]: I1013 05:27:39.920952 2790 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:27:39.921137 kubelet[2790]: I1013 05:27:39.921038 2790 server.go:1289] "Started kubelet" Oct 13 05:27:39.921673 kubelet[2790]: I1013 05:27:39.921453 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:27:39.921673 kubelet[2790]: I1013 05:27:39.921538 2790 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:27:39.922007 kubelet[2790]: I1013 05:27:39.921976 2790 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:27:39.923126 kubelet[2790]: I1013 05:27:39.923103 2790 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:27:39.923749 
kubelet[2790]: I1013 05:27:39.923710 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:27:39.924057 kubelet[2790]: I1013 05:27:39.923982 2790 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:27:39.924179 kubelet[2790]: I1013 05:27:39.924115 2790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:27:39.926985 kubelet[2790]: I1013 05:27:39.926912 2790 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:27:39.927055 kubelet[2790]: I1013 05:27:39.927033 2790 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:27:39.928235 kubelet[2790]: I1013 05:27:39.928183 2790 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:27:39.930625 kubelet[2790]: E1013 05:27:39.930495 2790 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:27:39.932974 kubelet[2790]: I1013 05:27:39.932901 2790 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:27:39.932974 kubelet[2790]: I1013 05:27:39.932925 2790 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:27:39.944996 kubelet[2790]: I1013 05:27:39.944966 2790 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:27:39.946430 kubelet[2790]: I1013 05:27:39.946327 2790 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:27:39.946430 kubelet[2790]: I1013 05:27:39.946360 2790 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:27:39.946430 kubelet[2790]: I1013 05:27:39.946402 2790 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:27:39.946430 kubelet[2790]: I1013 05:27:39.946410 2790 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:27:39.946558 kubelet[2790]: E1013 05:27:39.946483 2790 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:27:39.972390 kubelet[2790]: I1013 05:27:39.972344 2790 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:27:39.972390 kubelet[2790]: I1013 05:27:39.972363 2790 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:27:39.972390 kubelet[2790]: I1013 05:27:39.972388 2790 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:27:39.972594 kubelet[2790]: I1013 05:27:39.972522 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:27:39.972594 kubelet[2790]: I1013 05:27:39.972534 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:27:39.972594 kubelet[2790]: I1013 05:27:39.972561 2790 policy_none.go:49] "None policy: Start" Oct 13 05:27:39.972594 kubelet[2790]: I1013 05:27:39.972578 2790 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:27:39.972594 kubelet[2790]: I1013 05:27:39.972597 2790 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:27:39.972789 kubelet[2790]: I1013 05:27:39.972723 2790 state_mem.go:75] "Updated machine memory state" Oct 13 05:27:39.976883 kubelet[2790]: E1013 05:27:39.976855 2790 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:27:39.977103 kubelet[2790]: I1013 
05:27:39.977054 2790 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:27:39.977103 kubelet[2790]: I1013 05:27:39.977077 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:27:39.977319 kubelet[2790]: I1013 05:27:39.977294 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:27:39.978115 kubelet[2790]: E1013 05:27:39.977964 2790 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:27:40.047792 kubelet[2790]: I1013 05:27:40.047513 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:27:40.047792 kubelet[2790]: I1013 05:27:40.047564 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.047792 kubelet[2790]: I1013 05:27:40.047793 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.087431 kubelet[2790]: I1013 05:27:40.087328 2790 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:27:40.127929 kubelet[2790]: I1013 05:27:40.127881 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.127994 kubelet[2790]: I1013 05:27:40.127929 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.127994 kubelet[2790]: I1013 05:27:40.127959 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.128054 kubelet[2790]: I1013 05:27:40.128021 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.128079 kubelet[2790]: I1013 05:27:40.128058 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.128113 kubelet[2790]: I1013 05:27:40.128078 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:27:40.128113 kubelet[2790]: I1013 05:27:40.128093 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " 
pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.128113 kubelet[2790]: I1013 05:27:40.128110 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84e6e337e3264aa3e3e8e4ff7cc2b228-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84e6e337e3264aa3e3e8e4ff7cc2b228\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.128177 kubelet[2790]: I1013 05:27:40.128124 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:27:40.353405 kubelet[2790]: E1013 05:27:40.353247 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.358385 kubelet[2790]: E1013 05:27:40.358337 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.358570 kubelet[2790]: E1013 05:27:40.358416 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.380780 kubelet[2790]: I1013 05:27:40.380630 2790 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:27:40.380900 kubelet[2790]: I1013 05:27:40.380893 2790 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:27:40.911530 kubelet[2790]: I1013 05:27:40.911479 2790 apiserver.go:52] "Watching apiserver" Oct 13 05:27:40.927196 kubelet[2790]: I1013 
05:27:40.927159 2790 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:27:40.959190 kubelet[2790]: I1013 05:27:40.959152 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.959611 kubelet[2790]: E1013 05:27:40.959590 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.960098 kubelet[2790]: E1013 05:27:40.960070 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.965315 kubelet[2790]: E1013 05:27:40.965081 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:27:40.965315 kubelet[2790]: E1013 05:27:40.965224 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:40.985160 kubelet[2790]: I1013 05:27:40.985094 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.985069729 podStartE2EDuration="985.069729ms" podCreationTimestamp="2025-10-13 05:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:40.978054104 +0000 UTC m=+1.137375479" watchObservedRunningTime="2025-10-13 05:27:40.985069729 +0000 UTC m=+1.144391095" Oct 13 05:27:40.985405 kubelet[2790]: I1013 05:27:40.985198 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.985192772 
podStartE2EDuration="985.192772ms" podCreationTimestamp="2025-10-13 05:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:40.98477734 +0000 UTC m=+1.144098715" watchObservedRunningTime="2025-10-13 05:27:40.985192772 +0000 UTC m=+1.144514147" Oct 13 05:27:40.999021 kubelet[2790]: I1013 05:27:40.998957 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.998935611 podStartE2EDuration="998.935611ms" podCreationTimestamp="2025-10-13 05:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:40.991383744 +0000 UTC m=+1.150705119" watchObservedRunningTime="2025-10-13 05:27:40.998935611 +0000 UTC m=+1.158256986" Oct 13 05:27:41.961088 kubelet[2790]: E1013 05:27:41.961024 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:41.961635 kubelet[2790]: E1013 05:27:41.961177 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:41.982854 kubelet[2790]: E1013 05:27:41.982808 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:44.234921 kubelet[2790]: I1013 05:27:44.234859 2790 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:27:44.235543 containerd[1634]: time="2025-10-13T05:27:44.235407796Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 13 05:27:44.235857 kubelet[2790]: I1013 05:27:44.235638 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:27:44.472560 kubelet[2790]: E1013 05:27:44.472519 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:44.826814 systemd[1]: Created slice kubepods-besteffort-pod3c4ae70e_c9c4_4ee9_bd52_9a4553e9ec14.slice - libcontainer container kubepods-besteffort-pod3c4ae70e_c9c4_4ee9_bd52_9a4553e9ec14.slice. Oct 13 05:27:44.859175 kubelet[2790]: I1013 05:27:44.859131 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-kube-proxy\") pod \"kube-proxy-jhzlx\" (UID: \"3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14\") " pod="kube-system/kube-proxy-jhzlx" Oct 13 05:27:44.859175 kubelet[2790]: I1013 05:27:44.859160 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-xtables-lock\") pod \"kube-proxy-jhzlx\" (UID: \"3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14\") " pod="kube-system/kube-proxy-jhzlx" Oct 13 05:27:44.859175 kubelet[2790]: I1013 05:27:44.859179 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-lib-modules\") pod \"kube-proxy-jhzlx\" (UID: \"3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14\") " pod="kube-system/kube-proxy-jhzlx" Oct 13 05:27:44.859355 kubelet[2790]: I1013 05:27:44.859198 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zqr\" (UniqueName: 
\"kubernetes.io/projected/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-kube-api-access-n8zqr\") pod \"kube-proxy-jhzlx\" (UID: \"3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14\") " pod="kube-system/kube-proxy-jhzlx" Oct 13 05:27:44.964173 kubelet[2790]: E1013 05:27:44.964141 2790 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:27:44.964173 kubelet[2790]: E1013 05:27:44.964173 2790 projected.go:194] Error preparing data for projected volume kube-api-access-n8zqr for pod kube-system/kube-proxy-jhzlx: configmap "kube-root-ca.crt" not found Oct 13 05:27:44.964316 kubelet[2790]: E1013 05:27:44.964239 2790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-kube-api-access-n8zqr podName:3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14 nodeName:}" failed. No retries permitted until 2025-10-13 05:27:45.464219893 +0000 UTC m=+5.623541268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n8zqr" (UniqueName: "kubernetes.io/projected/3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14-kube-api-access-n8zqr") pod "kube-proxy-jhzlx" (UID: "3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14") : configmap "kube-root-ca.crt" not found Oct 13 05:27:45.391433 systemd[1]: Created slice kubepods-besteffort-pod5c233e96_3f7c_4708_b317_780328f4238d.slice - libcontainer container kubepods-besteffort-pod5c233e96_3f7c_4708_b317_780328f4238d.slice. 
Oct 13 05:27:45.463749 kubelet[2790]: I1013 05:27:45.463686 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxp4q\" (UniqueName: \"kubernetes.io/projected/5c233e96-3f7c-4708-b317-780328f4238d-kube-api-access-gxp4q\") pod \"tigera-operator-755d956888-qjn6d\" (UID: \"5c233e96-3f7c-4708-b317-780328f4238d\") " pod="tigera-operator/tigera-operator-755d956888-qjn6d" Oct 13 05:27:45.463749 kubelet[2790]: I1013 05:27:45.463745 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c233e96-3f7c-4708-b317-780328f4238d-var-lib-calico\") pod \"tigera-operator-755d956888-qjn6d\" (UID: \"5c233e96-3f7c-4708-b317-780328f4238d\") " pod="tigera-operator/tigera-operator-755d956888-qjn6d" Oct 13 05:27:45.695042 containerd[1634]: time="2025-10-13T05:27:45.694948148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qjn6d,Uid:5c233e96-3f7c-4708-b317-780328f4238d,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:27:45.715250 containerd[1634]: time="2025-10-13T05:27:45.715196057Z" level=info msg="connecting to shim 2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe" address="unix:///run/containerd/s/5319bf45da63999e93ed474065a7b7000733c52fdaf60597cf8d560e183a2a8a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:45.736201 kubelet[2790]: E1013 05:27:45.736164 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:45.737018 containerd[1634]: time="2025-10-13T05:27:45.736927797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhzlx,Uid:3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:45.742794 systemd[1]: Started 
cri-containerd-2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe.scope - libcontainer container 2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe. Oct 13 05:27:45.775772 containerd[1634]: time="2025-10-13T05:27:45.775716808Z" level=info msg="connecting to shim 1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06" address="unix:///run/containerd/s/24ca5f3b914f6358fc8554237d9b8b2fe09c4bda7bfa41571517768f1888de4f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:45.828442 containerd[1634]: time="2025-10-13T05:27:45.828388582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qjn6d,Uid:5c233e96-3f7c-4708-b317-780328f4238d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe\"" Oct 13 05:27:45.831804 containerd[1634]: time="2025-10-13T05:27:45.831637187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:27:45.840808 systemd[1]: Started cri-containerd-1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06.scope - libcontainer container 1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06. 
Oct 13 05:27:45.867687 containerd[1634]: time="2025-10-13T05:27:45.867635828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhzlx,Uid:3c4ae70e-c9c4-4ee9-bd52-9a4553e9ec14,Namespace:kube-system,Attempt:0,} returns sandbox id \"1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06\"" Oct 13 05:27:45.868428 kubelet[2790]: E1013 05:27:45.868375 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:45.873131 containerd[1634]: time="2025-10-13T05:27:45.873091029Z" level=info msg="CreateContainer within sandbox \"1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:27:45.883795 containerd[1634]: time="2025-10-13T05:27:45.883761203Z" level=info msg="Container da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:45.891859 containerd[1634]: time="2025-10-13T05:27:45.891804845Z" level=info msg="CreateContainer within sandbox \"1247216e29abf325eb81282a081f69af8ad48edd73670315e02805b9c7226a06\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7\"" Oct 13 05:27:45.892670 containerd[1634]: time="2025-10-13T05:27:45.892250978Z" level=info msg="StartContainer for \"da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7\"" Oct 13 05:27:45.893951 containerd[1634]: time="2025-10-13T05:27:45.893921269Z" level=info msg="connecting to shim da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7" address="unix:///run/containerd/s/24ca5f3b914f6358fc8554237d9b8b2fe09c4bda7bfa41571517768f1888de4f" protocol=ttrpc version=3 Oct 13 05:27:45.916801 systemd[1]: Started cri-containerd-da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7.scope - libcontainer 
container da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7. Oct 13 05:27:45.960573 containerd[1634]: time="2025-10-13T05:27:45.960448345Z" level=info msg="StartContainer for \"da249b7f7956a6e9d4cab8e98456cd1be635276e0cf6742530372bb717fec7c7\" returns successfully" Oct 13 05:27:45.968228 kubelet[2790]: E1013 05:27:45.968177 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:45.976274 kubelet[2790]: I1013 05:27:45.976208 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jhzlx" podStartSLOduration=1.976188428 podStartE2EDuration="1.976188428s" podCreationTimestamp="2025-10-13 05:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:45.9759696 +0000 UTC m=+6.135290975" watchObservedRunningTime="2025-10-13 05:27:45.976188428 +0000 UTC m=+6.135509803" Oct 13 05:27:47.110837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135687582.mount: Deactivated successfully. 
Oct 13 05:27:47.512730 containerd[1634]: time="2025-10-13T05:27:47.512587148Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:47.513410 containerd[1634]: time="2025-10-13T05:27:47.513375586Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:27:47.514624 containerd[1634]: time="2025-10-13T05:27:47.514589410Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:47.516453 containerd[1634]: time="2025-10-13T05:27:47.516410591Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:47.517050 containerd[1634]: time="2025-10-13T05:27:47.517004885Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.685194352s" Oct 13 05:27:47.517050 containerd[1634]: time="2025-10-13T05:27:47.517046591Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:27:47.521316 containerd[1634]: time="2025-10-13T05:27:47.521274222Z" level=info msg="CreateContainer within sandbox \"2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:27:47.529381 containerd[1634]: time="2025-10-13T05:27:47.529345671Z" level=info msg="Container 
4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:47.532951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241517023.mount: Deactivated successfully. Oct 13 05:27:47.536340 containerd[1634]: time="2025-10-13T05:27:47.536305314Z" level=info msg="CreateContainer within sandbox \"2c07acd0bd494971fbd6ffeed33f65ec74bfc95ca22a9a3c18acb08896b895fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8\"" Oct 13 05:27:47.536767 containerd[1634]: time="2025-10-13T05:27:47.536739436Z" level=info msg="StartContainer for \"4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8\"" Oct 13 05:27:47.537560 containerd[1634]: time="2025-10-13T05:27:47.537509009Z" level=info msg="connecting to shim 4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8" address="unix:///run/containerd/s/5319bf45da63999e93ed474065a7b7000733c52fdaf60597cf8d560e183a2a8a" protocol=ttrpc version=3 Oct 13 05:27:47.563794 systemd[1]: Started cri-containerd-4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8.scope - libcontainer container 4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8. 
Oct 13 05:27:47.595620 containerd[1634]: time="2025-10-13T05:27:47.595568086Z" level=info msg="StartContainer for \"4c1f9baaa0494a88ec99c2ca2e5fe27bf084aa38d2f77a53c3d4b42bcad4c3a8\" returns successfully" Oct 13 05:27:49.378417 kubelet[2790]: E1013 05:27:49.375582 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:49.395027 kubelet[2790]: I1013 05:27:49.394956 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-qjn6d" podStartSLOduration=2.707919746 podStartE2EDuration="4.394939374s" podCreationTimestamp="2025-10-13 05:27:45 +0000 UTC" firstStartedPulling="2025-10-13 05:27:45.830684392 +0000 UTC m=+5.990005767" lastFinishedPulling="2025-10-13 05:27:47.51770402 +0000 UTC m=+7.677025395" observedRunningTime="2025-10-13 05:27:47.985771684 +0000 UTC m=+8.145093059" watchObservedRunningTime="2025-10-13 05:27:49.394939374 +0000 UTC m=+9.554260749" Oct 13 05:27:49.980860 kubelet[2790]: E1013 05:27:49.980747 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:51.991020 kubelet[2790]: E1013 05:27:51.990934 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:52.199590 update_engine[1611]: I20251013 05:27:52.199385 1611 update_attempter.cc:509] Updating boot flags... 
Oct 13 05:27:52.617628 sudo[1834]: pam_unix(sudo:session): session closed for user root Oct 13 05:27:52.620676 sshd[1833]: Connection closed by 10.0.0.1 port 58714 Oct 13 05:27:52.622161 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:52.629401 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:58714.service: Deactivated successfully. Oct 13 05:27:52.632926 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:27:52.636606 systemd[1]: session-7.scope: Consumed 5.789s CPU time, 217.6M memory peak. Oct 13 05:27:52.638498 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:27:52.641102 systemd-logind[1604]: Removed session 7. Oct 13 05:27:54.481212 kubelet[2790]: E1013 05:27:54.481128 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:55.025410 kubelet[2790]: E1013 05:27:55.025357 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:55.037060 systemd[1]: Created slice kubepods-besteffort-podf5b8481d_1488_42f5_bfe0_57aef288f5f0.slice - libcontainer container kubepods-besteffort-podf5b8481d_1488_42f5_bfe0_57aef288f5f0.slice. 
Oct 13 05:27:55.132026 kubelet[2790]: I1013 05:27:55.131959 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5b8481d-1488-42f5-bfe0-57aef288f5f0-typha-certs\") pod \"calico-typha-654bbdbf89-wj4g6\" (UID: \"f5b8481d-1488-42f5-bfe0-57aef288f5f0\") " pod="calico-system/calico-typha-654bbdbf89-wj4g6" Oct 13 05:27:55.132026 kubelet[2790]: I1013 05:27:55.132008 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzsjk\" (UniqueName: \"kubernetes.io/projected/f5b8481d-1488-42f5-bfe0-57aef288f5f0-kube-api-access-mzsjk\") pod \"calico-typha-654bbdbf89-wj4g6\" (UID: \"f5b8481d-1488-42f5-bfe0-57aef288f5f0\") " pod="calico-system/calico-typha-654bbdbf89-wj4g6" Oct 13 05:27:55.132026 kubelet[2790]: I1013 05:27:55.132030 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5b8481d-1488-42f5-bfe0-57aef288f5f0-tigera-ca-bundle\") pod \"calico-typha-654bbdbf89-wj4g6\" (UID: \"f5b8481d-1488-42f5-bfe0-57aef288f5f0\") " pod="calico-system/calico-typha-654bbdbf89-wj4g6" Oct 13 05:27:55.184611 systemd[1]: Created slice kubepods-besteffort-pod1ed25f65_6d73_47cf_8eb6_aba4bedc0b52.slice - libcontainer container kubepods-besteffort-pod1ed25f65_6d73_47cf_8eb6_aba4bedc0b52.slice. 
Oct 13 05:27:55.233232 kubelet[2790]: I1013 05:27:55.233149 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-cni-log-dir\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233232 kubelet[2790]: I1013 05:27:55.233221 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-cni-net-dir\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233232 kubelet[2790]: I1013 05:27:55.233239 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-var-lib-calico\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233429 kubelet[2790]: I1013 05:27:55.233260 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-policysync\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233429 kubelet[2790]: I1013 05:27:55.233375 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-cni-bin-dir\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233533 kubelet[2790]: I1013 05:27:55.233508 2790 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-var-run-calico\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233571 kubelet[2790]: I1013 05:27:55.233547 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-xtables-lock\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233593 kubelet[2790]: I1013 05:27:55.233567 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-flexvol-driver-host\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233618 kubelet[2790]: I1013 05:27:55.233591 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-lib-modules\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233618 kubelet[2790]: I1013 05:27:55.233608 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-node-certs\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.233685 kubelet[2790]: I1013 05:27:55.233666 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j6gqb\" (UniqueName: \"kubernetes.io/projected/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-kube-api-access-j6gqb\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.234390 kubelet[2790]: I1013 05:27:55.234356 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed25f65-6d73-47cf-8eb6-aba4bedc0b52-tigera-ca-bundle\") pod \"calico-node-lh4m2\" (UID: \"1ed25f65-6d73-47cf-8eb6-aba4bedc0b52\") " pod="calico-system/calico-node-lh4m2" Oct 13 05:27:55.337025 kubelet[2790]: E1013 05:27:55.336973 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:27:55.342809 kubelet[2790]: E1013 05:27:55.342774 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.342809 kubelet[2790]: W1013 05:27:55.342803 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.343864 kubelet[2790]: E1013 05:27:55.343839 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.346673 kubelet[2790]: E1013 05:27:55.345548 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:55.347373 containerd[1634]: time="2025-10-13T05:27:55.347122615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-654bbdbf89-wj4g6,Uid:f5b8481d-1488-42f5-bfe0-57aef288f5f0,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:55.348292 kubelet[2790]: E1013 05:27:55.348213 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.348292 kubelet[2790]: W1013 05:27:55.348233 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.348292 kubelet[2790]: E1013 05:27:55.348256 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.373127 containerd[1634]: time="2025-10-13T05:27:55.373070957Z" level=info msg="connecting to shim fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2" address="unix:///run/containerd/s/b0e0ca123d200b1a1fdffc6d4782518e451682d4ab120db5350ad8783c6c388f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:55.410801 systemd[1]: Started cri-containerd-fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2.scope - libcontainer container fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2. 
Oct 13 05:27:55.415162 kubelet[2790]: E1013 05:27:55.415129 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.415162 kubelet[2790]: W1013 05:27:55.415157 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.415300 kubelet[2790]: E1013 05:27:55.415191 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.416717 kubelet[2790]: E1013 05:27:55.416616 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.416717 kubelet[2790]: W1013 05:27:55.416635 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.416717 kubelet[2790]: E1013 05:27:55.416659 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.416975 kubelet[2790]: E1013 05:27:55.416929 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.416975 kubelet[2790]: W1013 05:27:55.416939 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.416975 kubelet[2790]: E1013 05:27:55.416949 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.417248 kubelet[2790]: E1013 05:27:55.417229 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.417248 kubelet[2790]: W1013 05:27:55.417244 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.417402 kubelet[2790]: E1013 05:27:55.417253 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.417456 kubelet[2790]: E1013 05:27:55.417435 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.417456 kubelet[2790]: W1013 05:27:55.417443 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.417456 kubelet[2790]: E1013 05:27:55.417452 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.417619 kubelet[2790]: E1013 05:27:55.417602 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.417619 kubelet[2790]: W1013 05:27:55.417613 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.417714 kubelet[2790]: E1013 05:27:55.417623 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.417856 kubelet[2790]: E1013 05:27:55.417838 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.417856 kubelet[2790]: W1013 05:27:55.417852 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.417962 kubelet[2790]: E1013 05:27:55.417861 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.418028 kubelet[2790]: E1013 05:27:55.418016 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418028 kubelet[2790]: W1013 05:27:55.418024 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.418253 kubelet[2790]: E1013 05:27:55.418032 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.418253 kubelet[2790]: E1013 05:27:55.418212 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418253 kubelet[2790]: W1013 05:27:55.418220 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.418253 kubelet[2790]: E1013 05:27:55.418228 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.418386 kubelet[2790]: E1013 05:27:55.418376 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418386 kubelet[2790]: W1013 05:27:55.418384 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.418441 kubelet[2790]: E1013 05:27:55.418391 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.418575 kubelet[2790]: E1013 05:27:55.418534 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418575 kubelet[2790]: W1013 05:27:55.418547 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.418575 kubelet[2790]: E1013 05:27:55.418555 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.418764 kubelet[2790]: E1013 05:27:55.418746 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418764 kubelet[2790]: W1013 05:27:55.418759 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.418830 kubelet[2790]: E1013 05:27:55.418767 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.418944 kubelet[2790]: E1013 05:27:55.418928 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.418944 kubelet[2790]: W1013 05:27:55.418940 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.419016 kubelet[2790]: E1013 05:27:55.418948 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.436587 kubelet[2790]: I1013 05:27:55.436378 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d0ed4399-1a69-434d-b19f-5ac4963e0fd8-varrun\") pod \"csi-node-driver-cjhz6\" (UID: \"d0ed4399-1a69-434d-b19f-5ac4963e0fd8\") " pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:27:55.436881 kubelet[2790]: E1013 05:27:55.436665 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.436881 kubelet[2790]: W1013 05:27:55.436677 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.436881 kubelet[2790]: E1013 05:27:55.436686 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.436881 kubelet[2790]: I1013 05:27:55.436727 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0ed4399-1a69-434d-b19f-5ac4963e0fd8-kubelet-dir\") pod \"csi-node-driver-cjhz6\" (UID: \"d0ed4399-1a69-434d-b19f-5ac4963e0fd8\") " pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:27:55.437607 kubelet[2790]: E1013 05:27:55.437289 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.437607 kubelet[2790]: W1013 05:27:55.437306 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.437607 kubelet[2790]: E1013 05:27:55.437316 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.437607 kubelet[2790]: I1013 05:27:55.437350 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0ed4399-1a69-434d-b19f-5ac4963e0fd8-socket-dir\") pod \"csi-node-driver-cjhz6\" (UID: \"d0ed4399-1a69-434d-b19f-5ac4963e0fd8\") " pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:27:55.437836 kubelet[2790]: E1013 05:27:55.437758 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.437836 kubelet[2790]: W1013 05:27:55.437783 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.437836 kubelet[2790]: E1013 05:27:55.437808 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.438334 kubelet[2790]: E1013 05:27:55.438292 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.438334 kubelet[2790]: W1013 05:27:55.438303 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.438507 kubelet[2790]: E1013 05:27:55.438421 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.439667 kubelet[2790]: I1013 05:27:55.439630 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddl9\" (UniqueName: \"kubernetes.io/projected/d0ed4399-1a69-434d-b19f-5ac4963e0fd8-kube-api-access-6ddl9\") pod \"csi-node-driver-cjhz6\" (UID: \"d0ed4399-1a69-434d-b19f-5ac4963e0fd8\") " pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:27:55.439788 kubelet[2790]: E1013 05:27:55.439769 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.439788 kubelet[2790]: W1013 05:27:55.439784 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.439870 kubelet[2790]: E1013 05:27:55.439796 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.439988 kubelet[2790]: E1013 05:27:55.439972 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.439988 kubelet[2790]: W1013 05:27:55.439983 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.440044 kubelet[2790]: E1013 05:27:55.439991 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.440886 kubelet[2790]: I1013 05:27:55.440849 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0ed4399-1a69-434d-b19f-5ac4963e0fd8-registration-dir\") pod \"csi-node-driver-cjhz6\" (UID: \"d0ed4399-1a69-434d-b19f-5ac4963e0fd8\") " pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:27:55.441041 kubelet[2790]: E1013 05:27:55.441026 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.441041 kubelet[2790]: W1013 05:27:55.441037 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.441101 kubelet[2790]: E1013 05:27:55.441046 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:55.441219 kubelet[2790]: E1013 05:27:55.441204 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:55.441219 kubelet[2790]: W1013 05:27:55.441214 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:55.441267 kubelet[2790]: E1013 05:27:55.441222 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:55.460046 containerd[1634]: time="2025-10-13T05:27:55.460006896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-654bbdbf89-wj4g6,Uid:f5b8481d-1488-42f5-bfe0-57aef288f5f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2\"" Oct 13 05:27:55.461087 kubelet[2790]: E1013 05:27:55.461043 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:55.462385 containerd[1634]: time="2025-10-13T05:27:55.462336132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:27:55.487949 containerd[1634]: time="2025-10-13T05:27:55.487911308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lh4m2,Uid:1ed25f65-6d73-47cf-8eb6-aba4bedc0b52,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:55.508836 containerd[1634]: time="2025-10-13T05:27:55.508755633Z" level=info msg="connecting to shim edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812" address="unix:///run/containerd/s/0350ef0d531506ac85a22c30acc1592a1a0431f6af8f549bee31abf381109ccd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:55.540826 systemd[1]: Started cri-containerd-edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812.scope - libcontainer container edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812. 
Oct 13 05:27:55.576320 containerd[1634]: time="2025-10-13T05:27:55.576276191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lh4m2,Uid:1ed25f65-6d73-47cf-8eb6-aba4bedc0b52,Namespace:calico-system,Attempt:0,} returns sandbox id \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\"" Oct 13 05:27:56.778293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524684991.mount: Deactivated successfully. 
Oct 13 05:27:56.947152 kubelet[2790]: E1013 05:27:56.947048 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:27:57.278876 containerd[1634]: time="2025-10-13T05:27:57.278812461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:57.279640 containerd[1634]: time="2025-10-13T05:27:57.279603665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:27:57.280742 containerd[1634]: time="2025-10-13T05:27:57.280706071Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:57.282802 containerd[1634]: time="2025-10-13T05:27:57.282763672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:57.283516 containerd[1634]: time="2025-10-13T05:27:57.283476632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.821070523s" Oct 13 05:27:57.283554 containerd[1634]: time="2025-10-13T05:27:57.283516446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:27:57.284750 containerd[1634]: time="2025-10-13T05:27:57.284484345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:27:57.298268 containerd[1634]: time="2025-10-13T05:27:57.298219695Z" level=info msg="CreateContainer within sandbox \"fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:27:57.307681 containerd[1634]: time="2025-10-13T05:27:57.307201883Z" level=info msg="Container 4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:57.315397 containerd[1634]: time="2025-10-13T05:27:57.315345858Z" level=info msg="CreateContainer within sandbox \"fc9be8b1ec7f1dd781ea65b96c5c1b1ed55a9ba4de0a4d666b5227f8a7f3f1f2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c\"" Oct 13 05:27:57.315971 containerd[1634]: time="2025-10-13T05:27:57.315937355Z" level=info msg="StartContainer for \"4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c\"" Oct 13 05:27:57.317037 containerd[1634]: time="2025-10-13T05:27:57.317011339Z" level=info msg="connecting to shim 4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c" address="unix:///run/containerd/s/b0e0ca123d200b1a1fdffc6d4782518e451682d4ab120db5350ad8783c6c388f" protocol=ttrpc version=3 Oct 13 05:27:57.340795 systemd[1]: Started cri-containerd-4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c.scope - libcontainer container 4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c. 
Oct 13 05:27:57.393353 containerd[1634]: time="2025-10-13T05:27:57.393308621Z" level=info msg="StartContainer for \"4a03656a1813d594cb28f7ed1780ca934e0b468077443ddfa76ae3e2b91a2c8c\" returns successfully" Oct 13 05:27:57.997912 kubelet[2790]: E1013 05:27:57.997874 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:58.039396 kubelet[2790]: E1013 05:27:58.039365 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.039396 kubelet[2790]: W1013 05:27:58.039386 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.039525 kubelet[2790]: E1013 05:27:58.039407 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.039612 kubelet[2790]: E1013 05:27:58.039585 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.039612 kubelet[2790]: W1013 05:27:58.039598 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.039612 kubelet[2790]: E1013 05:27:58.039606 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.039837 kubelet[2790]: E1013 05:27:58.039810 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.039837 kubelet[2790]: W1013 05:27:58.039823 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.039837 kubelet[2790]: E1013 05:27:58.039833 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.040118 kubelet[2790]: E1013 05:27:58.040098 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.040118 kubelet[2790]: W1013 05:27:58.040109 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.040118 kubelet[2790]: E1013 05:27:58.040118 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.040322 kubelet[2790]: E1013 05:27:58.040294 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.040322 kubelet[2790]: W1013 05:27:58.040305 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.040322 kubelet[2790]: E1013 05:27:58.040313 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.040486 kubelet[2790]: E1013 05:27:58.040467 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.040486 kubelet[2790]: W1013 05:27:58.040478 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.040486 kubelet[2790]: E1013 05:27:58.040486 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.040687 kubelet[2790]: E1013 05:27:58.040667 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.040687 kubelet[2790]: W1013 05:27:58.040680 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.040744 kubelet[2790]: E1013 05:27:58.040688 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.040875 kubelet[2790]: E1013 05:27:58.040857 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.040875 kubelet[2790]: W1013 05:27:58.040867 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.040875 kubelet[2790]: E1013 05:27:58.040875 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.041073 kubelet[2790]: E1013 05:27:58.041054 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041073 kubelet[2790]: W1013 05:27:58.041065 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041073 kubelet[2790]: E1013 05:27:58.041072 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.041248 kubelet[2790]: E1013 05:27:58.041229 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041248 kubelet[2790]: W1013 05:27:58.041240 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041248 kubelet[2790]: E1013 05:27:58.041247 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.041421 kubelet[2790]: E1013 05:27:58.041403 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041421 kubelet[2790]: W1013 05:27:58.041413 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041421 kubelet[2790]: E1013 05:27:58.041421 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.041592 kubelet[2790]: E1013 05:27:58.041573 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041592 kubelet[2790]: W1013 05:27:58.041583 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041592 kubelet[2790]: E1013 05:27:58.041591 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.041797 kubelet[2790]: E1013 05:27:58.041778 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041797 kubelet[2790]: W1013 05:27:58.041789 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041797 kubelet[2790]: E1013 05:27:58.041797 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.041978 kubelet[2790]: E1013 05:27:58.041960 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.041978 kubelet[2790]: W1013 05:27:58.041970 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.041978 kubelet[2790]: E1013 05:27:58.041979 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.042166 kubelet[2790]: E1013 05:27:58.042147 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.042166 kubelet[2790]: W1013 05:27:58.042158 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.042219 kubelet[2790]: E1013 05:27:58.042168 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.073661 kubelet[2790]: E1013 05:27:58.073612 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.073661 kubelet[2790]: W1013 05:27:58.073636 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.073743 kubelet[2790]: E1013 05:27:58.073674 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.073925 kubelet[2790]: E1013 05:27:58.073902 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.073925 kubelet[2790]: W1013 05:27:58.073914 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.073925 kubelet[2790]: E1013 05:27:58.073923 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.074170 kubelet[2790]: E1013 05:27:58.074146 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.074170 kubelet[2790]: W1013 05:27:58.074158 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.074170 kubelet[2790]: E1013 05:27:58.074167 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.074446 kubelet[2790]: E1013 05:27:58.074414 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.074446 kubelet[2790]: W1013 05:27:58.074433 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.074446 kubelet[2790]: E1013 05:27:58.074447 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.074693 kubelet[2790]: E1013 05:27:58.074662 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.074693 kubelet[2790]: W1013 05:27:58.074675 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.074693 kubelet[2790]: E1013 05:27:58.074684 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.074911 kubelet[2790]: E1013 05:27:58.074876 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.074911 kubelet[2790]: W1013 05:27:58.074883 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.074911 kubelet[2790]: E1013 05:27:58.074892 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.075113 kubelet[2790]: E1013 05:27:58.075096 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.075113 kubelet[2790]: W1013 05:27:58.075106 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.075113 kubelet[2790]: E1013 05:27:58.075115 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.075301 kubelet[2790]: E1013 05:27:58.075286 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.075301 kubelet[2790]: W1013 05:27:58.075296 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.075445 kubelet[2790]: E1013 05:27:58.075304 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.075495 kubelet[2790]: E1013 05:27:58.075480 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.075495 kubelet[2790]: W1013 05:27:58.075490 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.075540 kubelet[2790]: E1013 05:27:58.075506 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.075710 kubelet[2790]: E1013 05:27:58.075694 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.075710 kubelet[2790]: W1013 05:27:58.075706 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.075780 kubelet[2790]: E1013 05:27:58.075715 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.075910 kubelet[2790]: E1013 05:27:58.075893 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.075910 kubelet[2790]: W1013 05:27:58.075904 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.075910 kubelet[2790]: E1013 05:27:58.075911 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.076125 kubelet[2790]: E1013 05:27:58.076108 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.076125 kubelet[2790]: W1013 05:27:58.076119 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.076125 kubelet[2790]: E1013 05:27:58.076127 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.076429 kubelet[2790]: E1013 05:27:58.076398 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.076429 kubelet[2790]: W1013 05:27:58.076416 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.076429 kubelet[2790]: E1013 05:27:58.076426 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.076641 kubelet[2790]: E1013 05:27:58.076617 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.076641 kubelet[2790]: W1013 05:27:58.076629 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.076641 kubelet[2790]: E1013 05:27:58.076638 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.076851 kubelet[2790]: E1013 05:27:58.076833 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.076851 kubelet[2790]: W1013 05:27:58.076845 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.076914 kubelet[2790]: E1013 05:27:58.076853 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.077088 kubelet[2790]: E1013 05:27:58.077070 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.077088 kubelet[2790]: W1013 05:27:58.077081 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.077088 kubelet[2790]: E1013 05:27:58.077090 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.077445 kubelet[2790]: E1013 05:27:58.077421 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.077445 kubelet[2790]: W1013 05:27:58.077441 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.077506 kubelet[2790]: E1013 05:27:58.077462 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:58.077701 kubelet[2790]: E1013 05:27:58.077684 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:58.077701 kubelet[2790]: W1013 05:27:58.077695 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:58.077759 kubelet[2790]: E1013 05:27:58.077704 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:58.713511 containerd[1634]: time="2025-10-13T05:27:58.713449284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:58.714109 containerd[1634]: time="2025-10-13T05:27:58.714064414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:27:58.715237 containerd[1634]: time="2025-10-13T05:27:58.715186729Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:58.718457 containerd[1634]: time="2025-10-13T05:27:58.718419403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:58.719052 containerd[1634]: time="2025-10-13T05:27:58.719014968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.434494337s" Oct 13 05:27:58.719103 containerd[1634]: time="2025-10-13T05:27:58.719054772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:27:58.723048 containerd[1634]: time="2025-10-13T05:27:58.723005547Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:27:58.731626 containerd[1634]: time="2025-10-13T05:27:58.731580134Z" level=info msg="Container 32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:58.740221 containerd[1634]: time="2025-10-13T05:27:58.740171732Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\"" Oct 13 05:27:58.741192 containerd[1634]: time="2025-10-13T05:27:58.741153859Z" level=info msg="StartContainer for \"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\"" Oct 13 05:27:58.742901 containerd[1634]: time="2025-10-13T05:27:58.742859175Z" level=info msg="connecting to shim 32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5" address="unix:///run/containerd/s/0350ef0d531506ac85a22c30acc1592a1a0431f6af8f549bee31abf381109ccd" protocol=ttrpc version=3 Oct 13 05:27:58.772810 systemd[1]: Started cri-containerd-32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5.scope - libcontainer container 
32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5. Oct 13 05:27:58.818469 containerd[1634]: time="2025-10-13T05:27:58.818279843Z" level=info msg="StartContainer for \"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\" returns successfully" Oct 13 05:27:58.826700 systemd[1]: cri-containerd-32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5.scope: Deactivated successfully. Oct 13 05:27:58.828389 containerd[1634]: time="2025-10-13T05:27:58.828347376Z" level=info msg="received exit event container_id:\"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\" id:\"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\" pid:3493 exited_at:{seconds:1760333278 nanos:827786876}" Oct 13 05:27:58.828545 containerd[1634]: time="2025-10-13T05:27:58.828473328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\" id:\"32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5\" pid:3493 exited_at:{seconds:1760333278 nanos:827786876}" Oct 13 05:27:58.855458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32f0e7957afb84f2e3fa2d6732642c9e67eeaa099e0597714bf707bae72848e5-rootfs.mount: Deactivated successfully. 
Oct 13 05:27:58.947683 kubelet[2790]: E1013 05:27:58.947572 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:27:59.001141 kubelet[2790]: I1013 05:27:59.001033 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:27:59.001540 kubelet[2790]: E1013 05:27:59.001477 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:59.143719 kubelet[2790]: I1013 05:27:59.143615 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-654bbdbf89-wj4g6" podStartSLOduration=3.321116536 podStartE2EDuration="5.143594104s" podCreationTimestamp="2025-10-13 05:27:54 +0000 UTC" firstStartedPulling="2025-10-13 05:27:55.461892236 +0000 UTC m=+15.621213611" lastFinishedPulling="2025-10-13 05:27:57.284369804 +0000 UTC m=+17.443691179" observedRunningTime="2025-10-13 05:27:58.008221395 +0000 UTC m=+18.167542770" watchObservedRunningTime="2025-10-13 05:27:59.143594104 +0000 UTC m=+19.302915479" Oct 13 05:28:00.005754 containerd[1634]: time="2025-10-13T05:28:00.005702515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:28:00.947635 kubelet[2790]: E1013 05:28:00.947555 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:28:02.887959 containerd[1634]: time="2025-10-13T05:28:02.887890189Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:02.888558 containerd[1634]: time="2025-10-13T05:28:02.888511665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:28:02.889659 containerd[1634]: time="2025-10-13T05:28:02.889612654Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:02.891455 containerd[1634]: time="2025-10-13T05:28:02.891431136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:02.892106 containerd[1634]: time="2025-10-13T05:28:02.892065866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.88630351s" Oct 13 05:28:02.892106 containerd[1634]: time="2025-10-13T05:28:02.892097514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:28:02.895562 containerd[1634]: time="2025-10-13T05:28:02.895505105Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:28:02.905895 containerd[1634]: time="2025-10-13T05:28:02.905864910Z" level=info msg="Container a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84: CDI devices from CRI 
Config.CDIDevices: []" Oct 13 05:28:02.914728 containerd[1634]: time="2025-10-13T05:28:02.914694083Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\"" Oct 13 05:28:02.915244 containerd[1634]: time="2025-10-13T05:28:02.915216988Z" level=info msg="StartContainer for \"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\"" Oct 13 05:28:02.916849 containerd[1634]: time="2025-10-13T05:28:02.916820482Z" level=info msg="connecting to shim a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84" address="unix:///run/containerd/s/0350ef0d531506ac85a22c30acc1592a1a0431f6af8f549bee31abf381109ccd" protocol=ttrpc version=3 Oct 13 05:28:02.947161 kubelet[2790]: E1013 05:28:02.947104 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:28:02.948835 systemd[1]: Started cri-containerd-a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84.scope - libcontainer container a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84. Oct 13 05:28:03.092262 containerd[1634]: time="2025-10-13T05:28:03.092190621Z" level=info msg="StartContainer for \"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\" returns successfully" Oct 13 05:28:04.041596 systemd[1]: cri-containerd-a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84.scope: Deactivated successfully. 
Oct 13 05:28:04.041994 systemd[1]: cri-containerd-a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84.scope: Consumed 618ms CPU time, 176.2M memory peak, 3.2M read from disk, 171.3M written to disk. Oct 13 05:28:04.045593 containerd[1634]: time="2025-10-13T05:28:04.044261633Z" level=info msg="received exit event container_id:\"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\" id:\"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\" pid:3555 exited_at:{seconds:1760333284 nanos:43353006}" Oct 13 05:28:04.045593 containerd[1634]: time="2025-10-13T05:28:04.044337241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\" id:\"a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84\" pid:3555 exited_at:{seconds:1760333284 nanos:43353006}" Oct 13 05:28:04.069074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7ef4d8821b81fecdfadfd4aab62a75bba9b68dbf49c9fa42711e8f41f6b3b84-rootfs.mount: Deactivated successfully. Oct 13 05:28:04.141534 kubelet[2790]: I1013 05:28:04.141492 2790 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:28:04.214004 systemd[1]: Created slice kubepods-besteffort-podf5cbf7e7_777e_49bc_81fe_a7d3cd83a422.slice - libcontainer container kubepods-besteffort-podf5cbf7e7_777e_49bc_81fe_a7d3cd83a422.slice. Oct 13 05:28:04.223997 systemd[1]: Created slice kubepods-burstable-podfa949c18_1836_4d88_b6ba_87e0260040eb.slice - libcontainer container kubepods-burstable-podfa949c18_1836_4d88_b6ba_87e0260040eb.slice. Oct 13 05:28:04.234663 systemd[1]: Created slice kubepods-besteffort-poddeeb05c6_03db_466f_aa9a_63f5c23b7762.slice - libcontainer container kubepods-besteffort-poddeeb05c6_03db_466f_aa9a_63f5c23b7762.slice. 
Oct 13 05:28:04.241978 systemd[1]: Created slice kubepods-besteffort-pod3912c993_81ea_4107_b456_1d7c6faa08ff.slice - libcontainer container kubepods-besteffort-pod3912c993_81ea_4107_b456_1d7c6faa08ff.slice. Oct 13 05:28:04.251948 systemd[1]: Created slice kubepods-besteffort-pod06ef6e4a_8317_43b7_899e_6a0b731c7417.slice - libcontainer container kubepods-besteffort-pod06ef6e4a_8317_43b7_899e_6a0b731c7417.slice. Oct 13 05:28:04.256578 kubelet[2790]: I1013 05:28:04.256526 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvmf\" (UniqueName: \"kubernetes.io/projected/deeb05c6-03db-466f-aa9a-63f5c23b7762-kube-api-access-phvmf\") pod \"calico-kube-controllers-59cddc5ff-gp9jb\" (UID: \"deeb05c6-03db-466f-aa9a-63f5c23b7762\") " pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" Oct 13 05:28:04.256825 kubelet[2790]: I1013 05:28:04.256587 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgbjt\" (UniqueName: \"kubernetes.io/projected/c51b0ffb-a000-4151-8805-e42f64018cc3-kube-api-access-rgbjt\") pod \"coredns-674b8bbfcf-6p2l2\" (UID: \"c51b0ffb-a000-4151-8805-e42f64018cc3\") " pod="kube-system/coredns-674b8bbfcf-6p2l2" Oct 13 05:28:04.256825 kubelet[2790]: I1013 05:28:04.256613 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-httnh\" (UniqueName: \"kubernetes.io/projected/fa949c18-1836-4d88-b6ba-87e0260040eb-kube-api-access-httnh\") pod \"coredns-674b8bbfcf-h2gxt\" (UID: \"fa949c18-1836-4d88-b6ba-87e0260040eb\") " pod="kube-system/coredns-674b8bbfcf-h2gxt" Oct 13 05:28:04.256825 kubelet[2790]: I1013 05:28:04.256633 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a700f481-9241-4166-b620-175cd7d4c02c-calico-apiserver-certs\") pod 
\"calico-apiserver-5cb6ddf7bc-r22k9\" (UID: \"a700f481-9241-4166-b620-175cd7d4c02c\") " pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" Oct 13 05:28:04.256825 kubelet[2790]: I1013 05:28:04.256765 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slk2d\" (UniqueName: \"kubernetes.io/projected/f5cbf7e7-777e-49bc-81fe-a7d3cd83a422-kube-api-access-slk2d\") pod \"calico-apiserver-5cb6ddf7bc-jxczs\" (UID: \"f5cbf7e7-777e-49bc-81fe-a7d3cd83a422\") " pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" Oct 13 05:28:04.256825 kubelet[2790]: I1013 05:28:04.256811 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-backend-key-pair\") pod \"whisker-59d66597d4-bkd49\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " pod="calico-system/whisker-59d66597d4-bkd49" Oct 13 05:28:04.257074 kubelet[2790]: I1013 05:28:04.256827 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6f8\" (UniqueName: \"kubernetes.io/projected/06ef6e4a-8317-43b7-899e-6a0b731c7417-kube-api-access-jg6f8\") pod \"goldmane-54d579b49d-qhqcv\" (UID: \"06ef6e4a-8317-43b7-899e-6a0b731c7417\") " pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.257074 kubelet[2790]: I1013 05:28:04.256848 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51b0ffb-a000-4151-8805-e42f64018cc3-config-volume\") pod \"coredns-674b8bbfcf-6p2l2\" (UID: \"c51b0ffb-a000-4151-8805-e42f64018cc3\") " pod="kube-system/coredns-674b8bbfcf-6p2l2" Oct 13 05:28:04.257074 kubelet[2790]: I1013 05:28:04.256872 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/fa949c18-1836-4d88-b6ba-87e0260040eb-config-volume\") pod \"coredns-674b8bbfcf-h2gxt\" (UID: \"fa949c18-1836-4d88-b6ba-87e0260040eb\") " pod="kube-system/coredns-674b8bbfcf-h2gxt" Oct 13 05:28:04.257074 kubelet[2790]: I1013 05:28:04.256938 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-ca-bundle\") pod \"whisker-59d66597d4-bkd49\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " pod="calico-system/whisker-59d66597d4-bkd49" Oct 13 05:28:04.257074 kubelet[2790]: I1013 05:28:04.256993 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deeb05c6-03db-466f-aa9a-63f5c23b7762-tigera-ca-bundle\") pod \"calico-kube-controllers-59cddc5ff-gp9jb\" (UID: \"deeb05c6-03db-466f-aa9a-63f5c23b7762\") " pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" Oct 13 05:28:04.257181 kubelet[2790]: I1013 05:28:04.257018 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5cbf7e7-777e-49bc-81fe-a7d3cd83a422-calico-apiserver-certs\") pod \"calico-apiserver-5cb6ddf7bc-jxczs\" (UID: \"f5cbf7e7-777e-49bc-81fe-a7d3cd83a422\") " pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" Oct 13 05:28:04.257181 kubelet[2790]: I1013 05:28:04.257066 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfb2\" (UniqueName: \"kubernetes.io/projected/3912c993-81ea-4107-b456-1d7c6faa08ff-kube-api-access-fxfb2\") pod \"whisker-59d66597d4-bkd49\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " pod="calico-system/whisker-59d66597d4-bkd49" Oct 13 05:28:04.257181 kubelet[2790]: I1013 05:28:04.257088 2790 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ef6e4a-8317-43b7-899e-6a0b731c7417-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qhqcv\" (UID: \"06ef6e4a-8317-43b7-899e-6a0b731c7417\") " pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.257181 kubelet[2790]: I1013 05:28:04.257112 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ef6e4a-8317-43b7-899e-6a0b731c7417-config\") pod \"goldmane-54d579b49d-qhqcv\" (UID: \"06ef6e4a-8317-43b7-899e-6a0b731c7417\") " pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.257181 kubelet[2790]: I1013 05:28:04.257134 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/06ef6e4a-8317-43b7-899e-6a0b731c7417-goldmane-key-pair\") pod \"goldmane-54d579b49d-qhqcv\" (UID: \"06ef6e4a-8317-43b7-899e-6a0b731c7417\") " pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.257300 kubelet[2790]: I1013 05:28:04.257155 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6pf\" (UniqueName: \"kubernetes.io/projected/a700f481-9241-4166-b620-175cd7d4c02c-kube-api-access-5k6pf\") pod \"calico-apiserver-5cb6ddf7bc-r22k9\" (UID: \"a700f481-9241-4166-b620-175cd7d4c02c\") " pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" Oct 13 05:28:04.258996 systemd[1]: Created slice kubepods-besteffort-poda700f481_9241_4166_b620_175cd7d4c02c.slice - libcontainer container kubepods-besteffort-poda700f481_9241_4166_b620_175cd7d4c02c.slice. Oct 13 05:28:04.265095 systemd[1]: Created slice kubepods-burstable-podc51b0ffb_a000_4151_8805_e42f64018cc3.slice - libcontainer container kubepods-burstable-podc51b0ffb_a000_4151_8805_e42f64018cc3.slice. 
Oct 13 05:28:04.520718 containerd[1634]: time="2025-10-13T05:28:04.520623864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-jxczs,Uid:f5cbf7e7-777e-49bc-81fe-a7d3cd83a422,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:04.528915 kubelet[2790]: E1013 05:28:04.528882 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:04.529480 containerd[1634]: time="2025-10-13T05:28:04.529433098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2gxt,Uid:fa949c18-1836-4d88-b6ba-87e0260040eb,Namespace:kube-system,Attempt:0,}" Oct 13 05:28:04.541284 containerd[1634]: time="2025-10-13T05:28:04.540225822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cddc5ff-gp9jb,Uid:deeb05c6-03db-466f-aa9a-63f5c23b7762,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:04.549277 containerd[1634]: time="2025-10-13T05:28:04.549240455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d66597d4-bkd49,Uid:3912c993-81ea-4107-b456-1d7c6faa08ff,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:04.569794 kubelet[2790]: E1013 05:28:04.569751 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:04.594446 containerd[1634]: time="2025-10-13T05:28:04.594391375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6p2l2,Uid:c51b0ffb-a000-4151-8805-e42f64018cc3,Namespace:kube-system,Attempt:0,}" Oct 13 05:28:04.594596 containerd[1634]: time="2025-10-13T05:28:04.594564354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qhqcv,Uid:06ef6e4a-8317-43b7-899e-6a0b731c7417,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:04.594704 containerd[1634]: 
time="2025-10-13T05:28:04.594681991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-r22k9,Uid:a700f481-9241-4166-b620-175cd7d4c02c,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:04.732943 containerd[1634]: time="2025-10-13T05:28:04.732888400Z" level=error msg="Failed to destroy network for sandbox \"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.735505 containerd[1634]: time="2025-10-13T05:28:04.735443245Z" level=error msg="Failed to destroy network for sandbox \"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.736265 containerd[1634]: time="2025-10-13T05:28:04.736229665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cddc5ff-gp9jb,Uid:deeb05c6-03db-466f-aa9a-63f5c23b7762,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.736980 kubelet[2790]: E1013 05:28:04.736941 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 13 05:28:04.737687 kubelet[2790]: E1013 05:28:04.737584 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" Oct 13 05:28:04.737687 kubelet[2790]: E1013 05:28:04.737625 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" Oct 13 05:28:04.738751 kubelet[2790]: E1013 05:28:04.737803 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59cddc5ff-gp9jb_calico-system(deeb05c6-03db-466f-aa9a-63f5c23b7762)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59cddc5ff-gp9jb_calico-system(deeb05c6-03db-466f-aa9a-63f5c23b7762)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c9a2a49faa6173efee5991210df8aade7fc581aa16105184731af5b76e4d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" podUID="deeb05c6-03db-466f-aa9a-63f5c23b7762" Oct 13 05:28:04.738958 containerd[1634]: time="2025-10-13T05:28:04.738930939Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-jxczs,Uid:f5cbf7e7-777e-49bc-81fe-a7d3cd83a422,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.739820 kubelet[2790]: E1013 05:28:04.739762 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.739939 kubelet[2790]: E1013 05:28:04.739921 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" Oct 13 05:28:04.740006 kubelet[2790]: E1013 05:28:04.739992 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" Oct 13 05:28:04.740101 kubelet[2790]: E1013 
05:28:04.740082 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cb6ddf7bc-jxczs_calico-apiserver(f5cbf7e7-777e-49bc-81fe-a7d3cd83a422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cb6ddf7bc-jxczs_calico-apiserver(f5cbf7e7-777e-49bc-81fe-a7d3cd83a422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90e68d8db812009539533cb84d1969129379378af7098b89354dfc2ecde86918\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" podUID="f5cbf7e7-777e-49bc-81fe-a7d3cd83a422" Oct 13 05:28:04.749524 containerd[1634]: time="2025-10-13T05:28:04.749330869Z" level=error msg="Failed to destroy network for sandbox \"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.750622 containerd[1634]: time="2025-10-13T05:28:04.750576788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6p2l2,Uid:c51b0ffb-a000-4151-8805-e42f64018cc3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.751003 kubelet[2790]: E1013 05:28:04.750815 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.751003 kubelet[2790]: E1013 05:28:04.750873 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6p2l2" Oct 13 05:28:04.751003 kubelet[2790]: E1013 05:28:04.750893 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6p2l2" Oct 13 05:28:04.751114 kubelet[2790]: E1013 05:28:04.750935 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6p2l2_kube-system(c51b0ffb-a000-4151-8805-e42f64018cc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6p2l2_kube-system(c51b0ffb-a000-4151-8805-e42f64018cc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2c70fffa9bd3cdae25cb153e1aa2ecf9b9d5abae5ac170aed075027cb56ba69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6p2l2" 
podUID="c51b0ffb-a000-4151-8805-e42f64018cc3" Oct 13 05:28:04.769494 containerd[1634]: time="2025-10-13T05:28:04.769434543Z" level=error msg="Failed to destroy network for sandbox \"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.771770 containerd[1634]: time="2025-10-13T05:28:04.771562579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qhqcv,Uid:06ef6e4a-8317-43b7-899e-6a0b731c7417,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.772614 kubelet[2790]: E1013 05:28:04.772162 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.772614 kubelet[2790]: E1013 05:28:04.772238 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.772614 kubelet[2790]: E1013 
05:28:04.772259 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qhqcv" Oct 13 05:28:04.772849 kubelet[2790]: E1013 05:28:04.772331 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qhqcv_calico-system(06ef6e4a-8317-43b7-899e-6a0b731c7417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qhqcv_calico-system(06ef6e4a-8317-43b7-899e-6a0b731c7417)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bb96bd3dd8f4248e0db656ee3fbc01bc601a1df7416094128fd5b177ccb0be7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qhqcv" podUID="06ef6e4a-8317-43b7-899e-6a0b731c7417" Oct 13 05:28:04.776424 containerd[1634]: time="2025-10-13T05:28:04.776400967Z" level=error msg="Failed to destroy network for sandbox \"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.778253 containerd[1634]: time="2025-10-13T05:28:04.778226415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2gxt,Uid:fa949c18-1836-4d88-b6ba-87e0260040eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.778697 kubelet[2790]: E1013 05:28:04.778609 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.779040 kubelet[2790]: E1013 05:28:04.778912 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2gxt" Oct 13 05:28:04.779040 kubelet[2790]: E1013 05:28:04.778941 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2gxt" Oct 13 05:28:04.779598 kubelet[2790]: E1013 05:28:04.779007 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h2gxt_kube-system(fa949c18-1836-4d88-b6ba-87e0260040eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-h2gxt_kube-system(fa949c18-1836-4d88-b6ba-87e0260040eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8548b2af1c60d1224a474903442cdb3234633ef37990b483f1823cb7bde9af4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h2gxt" podUID="fa949c18-1836-4d88-b6ba-87e0260040eb" Oct 13 05:28:04.783499 containerd[1634]: time="2025-10-13T05:28:04.783469610Z" level=error msg="Failed to destroy network for sandbox \"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.784915 containerd[1634]: time="2025-10-13T05:28:04.784861367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-r22k9,Uid:a700f481-9241-4166-b620-175cd7d4c02c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.785118 kubelet[2790]: E1013 05:28:04.785087 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.785169 kubelet[2790]: E1013 05:28:04.785138 2790 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" Oct 13 05:28:04.785169 kubelet[2790]: E1013 05:28:04.785160 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" Oct 13 05:28:04.785239 kubelet[2790]: E1013 05:28:04.785221 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cb6ddf7bc-r22k9_calico-apiserver(a700f481-9241-4166-b620-175cd7d4c02c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cb6ddf7bc-r22k9_calico-apiserver(a700f481-9241-4166-b620-175cd7d4c02c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2890be22f913e58fc3ae7ebcb6e7ebcd077b4f3536ffccc6e4067a3c8eb2bd3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" podUID="a700f481-9241-4166-b620-175cd7d4c02c" Oct 13 05:28:04.785417 containerd[1634]: time="2025-10-13T05:28:04.785367852Z" level=error msg="Failed to destroy network for sandbox \"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.786704 containerd[1634]: time="2025-10-13T05:28:04.786624941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d66597d4-bkd49,Uid:3912c993-81ea-4107-b456-1d7c6faa08ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.786918 kubelet[2790]: E1013 05:28:04.786785 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:04.786918 kubelet[2790]: E1013 05:28:04.786814 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d66597d4-bkd49" Oct 13 05:28:04.786918 kubelet[2790]: E1013 05:28:04.786832 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d66597d4-bkd49" Oct 13 05:28:04.787025 kubelet[2790]: E1013 05:28:04.786877 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59d66597d4-bkd49_calico-system(3912c993-81ea-4107-b456-1d7c6faa08ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59d66597d4-bkd49_calico-system(3912c993-81ea-4107-b456-1d7c6faa08ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aacb757a45393e04028766de38ace90349344111c3050d18d91a64aecbe9127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59d66597d4-bkd49" podUID="3912c993-81ea-4107-b456-1d7c6faa08ff" Oct 13 05:28:04.952713 systemd[1]: Created slice kubepods-besteffort-podd0ed4399_1a69_434d_b19f_5ac4963e0fd8.slice - libcontainer container kubepods-besteffort-podd0ed4399_1a69_434d_b19f_5ac4963e0fd8.slice. 
Oct 13 05:28:04.955083 containerd[1634]: time="2025-10-13T05:28:04.955031744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjhz6,Uid:d0ed4399-1a69-434d-b19f-5ac4963e0fd8,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:05.106630 containerd[1634]: time="2025-10-13T05:28:05.106560854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:28:05.255608 containerd[1634]: time="2025-10-13T05:28:05.255043282Z" level=error msg="Failed to destroy network for sandbox \"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:05.257277 containerd[1634]: time="2025-10-13T05:28:05.257215322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjhz6,Uid:d0ed4399-1a69-434d-b19f-5ac4963e0fd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:05.257974 kubelet[2790]: E1013 05:28:05.257816 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:28:05.257974 kubelet[2790]: E1013 05:28:05.257904 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:28:05.257974 kubelet[2790]: E1013 05:28:05.257926 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjhz6" Oct 13 05:28:05.258713 kubelet[2790]: E1013 05:28:05.258011 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjhz6_calico-system(d0ed4399-1a69-434d-b19f-5ac4963e0fd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjhz6_calico-system(d0ed4399-1a69-434d-b19f-5ac4963e0fd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e45bef8288f126fc92b3a1f06a6fe0862287e92b3d25b15713f9728a66b5d0bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjhz6" podUID="d0ed4399-1a69-434d-b19f-5ac4963e0fd8" Oct 13 05:28:05.258172 systemd[1]: run-netns-cni\x2dde7e399a\x2dbb42\x2d28cb\x2da9af\x2dc3c436f035bd.mount: Deactivated successfully. Oct 13 05:28:12.941613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538397222.mount: Deactivated successfully. 
Oct 13 05:28:13.881874 containerd[1634]: time="2025-10-13T05:28:13.881795718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:13.882505 containerd[1634]: time="2025-10-13T05:28:13.882465299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:28:13.883730 containerd[1634]: time="2025-10-13T05:28:13.883703482Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:13.885703 containerd[1634]: time="2025-10-13T05:28:13.885645040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:13.886152 containerd[1634]: time="2025-10-13T05:28:13.886108488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.779508321s" Oct 13 05:28:13.886196 containerd[1634]: time="2025-10-13T05:28:13.886150556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:28:13.904168 containerd[1634]: time="2025-10-13T05:28:13.904116819Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:28:13.917410 containerd[1634]: time="2025-10-13T05:28:13.917368899Z" level=info msg="Container 
1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:13.928719 containerd[1634]: time="2025-10-13T05:28:13.928530427Z" level=info msg="CreateContainer within sandbox \"edd382b080ecb58df1ccd16cff2005e23524b1fe1995f555cefbe3f79d3b2812\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\"" Oct 13 05:28:13.930628 containerd[1634]: time="2025-10-13T05:28:13.930587899Z" level=info msg="StartContainer for \"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\"" Oct 13 05:28:13.932498 containerd[1634]: time="2025-10-13T05:28:13.932471999Z" level=info msg="connecting to shim 1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239" address="unix:///run/containerd/s/0350ef0d531506ac85a22c30acc1592a1a0431f6af8f549bee31abf381109ccd" protocol=ttrpc version=3 Oct 13 05:28:13.963791 systemd[1]: Started cri-containerd-1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239.scope - libcontainer container 1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239. Oct 13 05:28:14.016611 containerd[1634]: time="2025-10-13T05:28:14.016564395Z" level=info msg="StartContainer for \"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\" returns successfully" Oct 13 05:28:14.100145 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:28:14.101294 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:28:14.169014 kubelet[2790]: I1013 05:28:14.168864 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lh4m2" podStartSLOduration=0.859998881 podStartE2EDuration="19.168848248s" podCreationTimestamp="2025-10-13 05:27:55 +0000 UTC" firstStartedPulling="2025-10-13 05:27:55.578043614 +0000 UTC m=+15.737364989" lastFinishedPulling="2025-10-13 05:28:13.886892981 +0000 UTC m=+34.046214356" observedRunningTime="2025-10-13 05:28:14.166981148 +0000 UTC m=+34.326302523" watchObservedRunningTime="2025-10-13 05:28:14.168848248 +0000 UTC m=+34.328169613" Oct 13 05:28:14.318720 kubelet[2790]: I1013 05:28:14.318672 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-backend-key-pair\") pod \"3912c993-81ea-4107-b456-1d7c6faa08ff\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " Oct 13 05:28:14.318720 kubelet[2790]: I1013 05:28:14.318725 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-ca-bundle\") pod \"3912c993-81ea-4107-b456-1d7c6faa08ff\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " Oct 13 05:28:14.318720 kubelet[2790]: I1013 05:28:14.318743 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxfb2\" (UniqueName: \"kubernetes.io/projected/3912c993-81ea-4107-b456-1d7c6faa08ff-kube-api-access-fxfb2\") pod \"3912c993-81ea-4107-b456-1d7c6faa08ff\" (UID: \"3912c993-81ea-4107-b456-1d7c6faa08ff\") " Oct 13 05:28:14.320854 kubelet[2790]: I1013 05:28:14.320807 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"3912c993-81ea-4107-b456-1d7c6faa08ff" (UID: "3912c993-81ea-4107-b456-1d7c6faa08ff"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:28:14.324003 kubelet[2790]: I1013 05:28:14.323967 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3912c993-81ea-4107-b456-1d7c6faa08ff-kube-api-access-fxfb2" (OuterVolumeSpecName: "kube-api-access-fxfb2") pod "3912c993-81ea-4107-b456-1d7c6faa08ff" (UID: "3912c993-81ea-4107-b456-1d7c6faa08ff"). InnerVolumeSpecName "kube-api-access-fxfb2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:28:14.324121 kubelet[2790]: I1013 05:28:14.323973 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3912c993-81ea-4107-b456-1d7c6faa08ff" (UID: "3912c993-81ea-4107-b456-1d7c6faa08ff"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:28:14.325043 systemd[1]: var-lib-kubelet-pods-3912c993\x2d81ea\x2d4107\x2db456\x2d1d7c6faa08ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxfb2.mount: Deactivated successfully. Oct 13 05:28:14.327776 systemd[1]: var-lib-kubelet-pods-3912c993\x2d81ea\x2d4107\x2db456\x2d1d7c6faa08ff-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 13 05:28:14.419442 kubelet[2790]: I1013 05:28:14.419321 2790 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:28:14.419442 kubelet[2790]: I1013 05:28:14.419353 2790 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3912c993-81ea-4107-b456-1d7c6faa08ff-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:28:14.419442 kubelet[2790]: I1013 05:28:14.419362 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxfb2\" (UniqueName: \"kubernetes.io/projected/3912c993-81ea-4107-b456-1d7c6faa08ff-kube-api-access-fxfb2\") on node \"localhost\" DevicePath \"\"" Oct 13 05:28:14.947353 kubelet[2790]: E1013 05:28:14.947296 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:14.947860 containerd[1634]: time="2025-10-13T05:28:14.947767045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2gxt,Uid:fa949c18-1836-4d88-b6ba-87e0260040eb,Namespace:kube-system,Attempt:0,}" Oct 13 05:28:15.155784 systemd[1]: Removed slice kubepods-besteffort-pod3912c993_81ea_4107_b456_1d7c6faa08ff.slice - libcontainer container kubepods-besteffort-pod3912c993_81ea_4107_b456_1d7c6faa08ff.slice. Oct 13 05:28:15.220226 systemd[1]: Created slice kubepods-besteffort-podbb8485ee_a25d_42e8_bf98_167f236a62f5.slice - libcontainer container kubepods-besteffort-podbb8485ee_a25d_42e8_bf98_167f236a62f5.slice. 
Oct 13 05:28:15.226305 kubelet[2790]: I1013 05:28:15.225771 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8485ee-a25d-42e8-bf98-167f236a62f5-whisker-ca-bundle\") pod \"whisker-6fb94755-wrqf8\" (UID: \"bb8485ee-a25d-42e8-bf98-167f236a62f5\") " pod="calico-system/whisker-6fb94755-wrqf8" Oct 13 05:28:15.226305 kubelet[2790]: I1013 05:28:15.225825 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2pd\" (UniqueName: \"kubernetes.io/projected/bb8485ee-a25d-42e8-bf98-167f236a62f5-kube-api-access-xk2pd\") pod \"whisker-6fb94755-wrqf8\" (UID: \"bb8485ee-a25d-42e8-bf98-167f236a62f5\") " pod="calico-system/whisker-6fb94755-wrqf8" Oct 13 05:28:15.226305 kubelet[2790]: I1013 05:28:15.225860 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb8485ee-a25d-42e8-bf98-167f236a62f5-whisker-backend-key-pair\") pod \"whisker-6fb94755-wrqf8\" (UID: \"bb8485ee-a25d-42e8-bf98-167f236a62f5\") " pod="calico-system/whisker-6fb94755-wrqf8" Oct 13 05:28:15.255824 systemd-networkd[1530]: cali2c5548799f2: Link UP Oct 13 05:28:15.256886 systemd-networkd[1530]: cali2c5548799f2: Gained carrier Oct 13 05:28:15.273963 containerd[1634]: 2025-10-13 05:28:15.076 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:15.273963 containerd[1634]: 2025-10-13 05:28:15.103 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0 coredns-674b8bbfcf- kube-system fa949c18-1836-4d88-b6ba-87e0260040eb 869 0 2025-10-13 05:27:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h2gxt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c5548799f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-" Oct 13 05:28:15.273963 containerd[1634]: 2025-10-13 05:28:15.103 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.273963 containerd[1634]: 2025-10-13 05:28:15.172 [INFO][3946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" HandleID="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Workload="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.173 [INFO][3946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" HandleID="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Workload="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h2gxt", "timestamp":"2025-10-13 05:28:15.17277002 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:15.274224 containerd[1634]: 
2025-10-13 05:28:15.173 [INFO][3946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.173 [INFO][3946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.173 [INFO][3946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.180 [INFO][3946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" host="localhost" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.189 [INFO][3946] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.198 [INFO][3946] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.203 [INFO][3946] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.216 [INFO][3946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:15.274224 containerd[1634]: 2025-10-13 05:28:15.216 [INFO][3946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" host="localhost" Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.222 [INFO][3946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8 Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.229 [INFO][3946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" 
host="localhost" Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.236 [INFO][3946] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" host="localhost" Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.236 [INFO][3946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" host="localhost" Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.236 [INFO][3946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:15.274446 containerd[1634]: 2025-10-13 05:28:15.236 [INFO][3946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" HandleID="k8s-pod-network.28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Workload="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.274564 containerd[1634]: 2025-10-13 05:28:15.240 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fa949c18-1836-4d88-b6ba-87e0260040eb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h2gxt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5548799f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:15.274642 containerd[1634]: 2025-10-13 05:28:15.241 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.274642 containerd[1634]: 2025-10-13 05:28:15.241 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c5548799f2 ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.274642 containerd[1634]: 2025-10-13 05:28:15.256 [INFO][3934] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.274737 containerd[1634]: 2025-10-13 05:28:15.259 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fa949c18-1836-4d88-b6ba-87e0260040eb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8", Pod:"coredns-674b8bbfcf-h2gxt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5548799f2", MAC:"f6:d0:c1:dc:66:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:15.274737 containerd[1634]: 2025-10-13 05:28:15.269 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2gxt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2gxt-eth0" Oct 13 05:28:15.310433 containerd[1634]: time="2025-10-13T05:28:15.310387845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\" id:\"9a659e306f82f73037a933f6bdcda45355899e2e1355f374d0e7ff3d81a7b630\" pid:3966 exit_status:1 exited_at:{seconds:1760333295 nanos:309635591}" Oct 13 05:28:15.361146 containerd[1634]: time="2025-10-13T05:28:15.361096446Z" level=info msg="connecting to shim 28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8" address="unix:///run/containerd/s/538c7de8de9e163eb7c5f2c4e5203a0d5dee03c286b41f8356da73b655106ddd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:15.373069 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:60414.service - OpenSSH per-connection server daemon (10.0.0.1:60414). Oct 13 05:28:15.395808 systemd[1]: Started cri-containerd-28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8.scope - libcontainer container 28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8. 
Oct 13 05:28:15.411784 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:15.460727 containerd[1634]: time="2025-10-13T05:28:15.460643640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2gxt,Uid:fa949c18-1836-4d88-b6ba-87e0260040eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8\"" Oct 13 05:28:15.461947 kubelet[2790]: E1013 05:28:15.461909 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:15.466831 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 60414 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:15.467841 containerd[1634]: time="2025-10-13T05:28:15.467799847Z" level=info msg="CreateContainer within sandbox \"28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:28:15.471612 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:15.480881 systemd-logind[1604]: New session 8 of user core. Oct 13 05:28:15.489302 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:28:15.496315 containerd[1634]: time="2025-10-13T05:28:15.496262695Z" level=info msg="Container 2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:15.509810 containerd[1634]: time="2025-10-13T05:28:15.509754569Z" level=info msg="CreateContainer within sandbox \"28b39b9b304386abb1dcc2a23edcd20b4fd905758b5ebdeda4de8f913da96cd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd\"" Oct 13 05:28:15.511088 containerd[1634]: time="2025-10-13T05:28:15.511058796Z" level=info msg="StartContainer for \"2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd\"" Oct 13 05:28:15.515955 containerd[1634]: time="2025-10-13T05:28:15.515919188Z" level=info msg="connecting to shim 2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd" address="unix:///run/containerd/s/538c7de8de9e163eb7c5f2c4e5203a0d5dee03c286b41f8356da73b655106ddd" protocol=ttrpc version=3 Oct 13 05:28:15.529617 containerd[1634]: time="2025-10-13T05:28:15.529575605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb94755-wrqf8,Uid:bb8485ee-a25d-42e8-bf98-167f236a62f5,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:15.561868 systemd[1]: Started cri-containerd-2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd.scope - libcontainer container 2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd. Oct 13 05:28:15.637325 containerd[1634]: time="2025-10-13T05:28:15.637244907Z" level=info msg="StartContainer for \"2f36cd54f645c04324ac74923cecd2f3f1d4afd5833c19ba4cd4d891826394cd\" returns successfully" Oct 13 05:28:15.702214 sshd[4110]: Connection closed by 10.0.0.1 port 60414 Oct 13 05:28:15.703100 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:15.710461 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. 
Oct 13 05:28:15.711384 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:60414.service: Deactivated successfully. Oct 13 05:28:15.714836 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:28:15.719629 systemd-logind[1604]: Removed session 8. Oct 13 05:28:15.761037 systemd-networkd[1530]: cali7d41d51ab93: Link UP Oct 13 05:28:15.762801 systemd-networkd[1530]: cali7d41d51ab93: Gained carrier Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.607 [INFO][4149] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.641 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6fb94755--wrqf8-eth0 whisker-6fb94755- calico-system bb8485ee-a25d-42e8-bf98-167f236a62f5 976 0 2025-10-13 05:28:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fb94755 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6fb94755-wrqf8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7d41d51ab93 [] [] }} ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.641 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.709 [INFO][4192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" 
HandleID="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Workload="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.709 [INFO][4192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" HandleID="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Workload="localhost-k8s-whisker--6fb94755--wrqf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c08e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6fb94755-wrqf8", "timestamp":"2025-10-13 05:28:15.709504994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.709 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.710 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.710 [INFO][4192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.717 [INFO][4192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.723 [INFO][4192] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.727 [INFO][4192] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.729 [INFO][4192] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.731 [INFO][4192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.731 [INFO][4192] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.734 [INFO][4192] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2 Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.748 [INFO][4192] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.753 [INFO][4192] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.753 [INFO][4192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" host="localhost" Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.754 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:15.777792 containerd[1634]: 2025-10-13 05:28:15.754 [INFO][4192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" HandleID="k8s-pod-network.b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Workload="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.758 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fb94755--wrqf8-eth0", GenerateName:"whisker-6fb94755-", Namespace:"calico-system", SelfLink:"", UID:"bb8485ee-a25d-42e8-bf98-167f236a62f5", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fb94755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6fb94755-wrqf8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d41d51ab93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.758 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.758 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d41d51ab93 ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.760 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.761 [INFO][4149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" 
WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fb94755--wrqf8-eth0", GenerateName:"whisker-6fb94755-", Namespace:"calico-system", SelfLink:"", UID:"bb8485ee-a25d-42e8-bf98-167f236a62f5", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fb94755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2", Pod:"whisker-6fb94755-wrqf8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d41d51ab93", MAC:"66:af:e8:fe:c7:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:15.778465 containerd[1634]: 2025-10-13 05:28:15.772 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" Namespace="calico-system" Pod="whisker-6fb94755-wrqf8" WorkloadEndpoint="localhost-k8s-whisker--6fb94755--wrqf8-eth0" Oct 13 05:28:15.819866 containerd[1634]: time="2025-10-13T05:28:15.819814280Z" level=info msg="connecting to shim 
b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2" address="unix:///run/containerd/s/3f75e99c51cd3e713bc1f070842dcb07c9073e5b76d960b8cf42f09b139a4e41" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:15.854813 systemd[1]: Started cri-containerd-b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2.scope - libcontainer container b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2. Oct 13 05:28:15.867749 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:15.898701 containerd[1634]: time="2025-10-13T05:28:15.898639827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb94755-wrqf8,Uid:bb8485ee-a25d-42e8-bf98-167f236a62f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2\"" Oct 13 05:28:15.903944 containerd[1634]: time="2025-10-13T05:28:15.903909156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:28:15.947907 kubelet[2790]: E1013 05:28:15.947869 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:15.948679 containerd[1634]: time="2025-10-13T05:28:15.948626590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6p2l2,Uid:c51b0ffb-a000-4151-8805-e42f64018cc3,Namespace:kube-system,Attempt:0,}" Oct 13 05:28:15.950525 kubelet[2790]: I1013 05:28:15.950476 2790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3912c993-81ea-4107-b456-1d7c6faa08ff" path="/var/lib/kubelet/pods/3912c993-81ea-4107-b456-1d7c6faa08ff/volumes" Oct 13 05:28:16.059085 systemd-networkd[1530]: calidcc1d75f7c0: Link UP Oct 13 05:28:16.060122 systemd-networkd[1530]: calidcc1d75f7c0: Gained carrier Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:15.977 [INFO][4259] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:15.987 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0 coredns-674b8bbfcf- kube-system c51b0ffb-a000-4151-8805-e42f64018cc3 872 0 2025-10-13 05:27:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6p2l2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcc1d75f7c0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:15.988 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.017 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" HandleID="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Workload="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.018 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" HandleID="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" 
Workload="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6p2l2", "timestamp":"2025-10-13 05:28:16.017877147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.018 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.018 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.018 [INFO][4273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.028 [INFO][4273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.032 [INFO][4273] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.036 [INFO][4273] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.039 [INFO][4273] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.041 [INFO][4273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.041 [INFO][4273] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.044 [INFO][4273] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.048 [INFO][4273] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.053 [INFO][4273] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.053 [INFO][4273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" host="localhost" Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.053 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:16.073917 containerd[1634]: 2025-10-13 05:28:16.053 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" HandleID="k8s-pod-network.4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Workload="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.056 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c51b0ffb-a000-4151-8805-e42f64018cc3", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6p2l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcc1d75f7c0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.057 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.057 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcc1d75f7c0 ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.062 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.062 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c51b0ffb-a000-4151-8805-e42f64018cc3", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b", Pod:"coredns-674b8bbfcf-6p2l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcc1d75f7c0", MAC:"8e:50:fe:20:f6:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:16.074699 containerd[1634]: 2025-10-13 05:28:16.069 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6p2l2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6p2l2-eth0" Oct 13 05:28:16.106549 containerd[1634]: time="2025-10-13T05:28:16.106455494Z" level=info msg="connecting to shim 4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b" address="unix:///run/containerd/s/591e3a5c41fcb33ec107e62abb0068115cb8e6556381c9b6cdad816b44e4ce40" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:16.132780 systemd[1]: Started cri-containerd-4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b.scope - libcontainer container 4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b. Oct 13 05:28:16.146791 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:16.150538 kubelet[2790]: E1013 05:28:16.150503 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:16.184713 kubelet[2790]: I1013 05:28:16.184601 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h2gxt" podStartSLOduration=31.184582017 podStartE2EDuration="31.184582017s" podCreationTimestamp="2025-10-13 05:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:28:16.167807407 +0000 UTC m=+36.327128793" watchObservedRunningTime="2025-10-13 05:28:16.184582017 +0000 UTC m=+36.343903393" Oct 13 05:28:16.196408 containerd[1634]: time="2025-10-13T05:28:16.195797879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6p2l2,Uid:c51b0ffb-a000-4151-8805-e42f64018cc3,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b\"" Oct 13 05:28:16.197542 kubelet[2790]: E1013 05:28:16.197500 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:16.202884 containerd[1634]: time="2025-10-13T05:28:16.202818368Z" level=info msg="CreateContainer within sandbox \"4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:28:16.214122 containerd[1634]: time="2025-10-13T05:28:16.214087017Z" level=info msg="Container 907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:16.220192 containerd[1634]: time="2025-10-13T05:28:16.220154270Z" level=info msg="CreateContainer within sandbox \"4c9b99548e9a95ec0cd38527be4814517c87c36ee65b9c6b593bbaee676ffc5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1\"" Oct 13 05:28:16.221042 containerd[1634]: time="2025-10-13T05:28:16.220977136Z" level=info msg="StartContainer for \"907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1\"" Oct 13 05:28:16.222547 containerd[1634]: time="2025-10-13T05:28:16.222511651Z" level=info msg="connecting to shim 907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1" address="unix:///run/containerd/s/591e3a5c41fcb33ec107e62abb0068115cb8e6556381c9b6cdad816b44e4ce40" protocol=ttrpc version=3 Oct 13 05:28:16.248816 systemd[1]: Started cri-containerd-907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1.scope - libcontainer container 907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1. 
Oct 13 05:28:16.276503 containerd[1634]: time="2025-10-13T05:28:16.276455095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\" id:\"fad3c7ae083ee9711ed59dc576874676d40b349d96ba85f82d3f0af78902d8c9\" pid:4346 exit_status:1 exited_at:{seconds:1760333296 nanos:276023564}" Oct 13 05:28:16.288746 containerd[1634]: time="2025-10-13T05:28:16.288724439Z" level=info msg="StartContainer for \"907914948cd804e504c3f1296904151355682d59d3e0938cd558bac074f6eba1\" returns successfully" Oct 13 05:28:16.311831 systemd-networkd[1530]: cali2c5548799f2: Gained IPv6LL Oct 13 05:28:16.948108 containerd[1634]: time="2025-10-13T05:28:16.948050011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-r22k9,Uid:a700f481-9241-4166-b620-175cd7d4c02c,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:16.948357 containerd[1634]: time="2025-10-13T05:28:16.948090927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cddc5ff-gp9jb,Uid:deeb05c6-03db-466f-aa9a-63f5c23b7762,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:16.948357 containerd[1634]: time="2025-10-13T05:28:16.948090927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjhz6,Uid:d0ed4399-1a69-434d-b19f-5ac4963e0fd8,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:17.052168 systemd-networkd[1530]: cali9d3efe55613: Link UP Oct 13 05:28:17.052377 systemd-networkd[1530]: cali9d3efe55613: Gained carrier Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:16.974 [INFO][4425] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:16.990 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0 calico-apiserver-5cb6ddf7bc- calico-apiserver 
a700f481-9241-4166-b620-175cd7d4c02c 871 0 2025-10-13 05:27:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cb6ddf7bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cb6ddf7bc-r22k9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d3efe55613 [] [] }} ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:16.990 [INFO][4425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.016 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" HandleID="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" HandleID="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003472d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-5cb6ddf7bc-r22k9", "timestamp":"2025-10-13 05:28:17.016842243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.023 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.026 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.030 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.031 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.033 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.033 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.035 [INFO][4469] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065 Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.039 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" host="localhost" Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:17.064736 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" HandleID="k8s-pod-network.c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.050 [INFO][4425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0", GenerateName:"calico-apiserver-5cb6ddf7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a700f481-9241-4166-b620-175cd7d4c02c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb6ddf7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cb6ddf7bc-r22k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d3efe55613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.050 [INFO][4425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.050 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d3efe55613 ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.052 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.052 [INFO][4425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0", 
GenerateName:"calico-apiserver-5cb6ddf7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a700f481-9241-4166-b620-175cd7d4c02c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb6ddf7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065", Pod:"calico-apiserver-5cb6ddf7bc-r22k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d3efe55613", MAC:"2a:ca:eb:91:cd:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.065534 containerd[1634]: 2025-10-13 05:28:17.059 [INFO][4425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-r22k9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--r22k9-eth0" Oct 13 05:28:17.081177 containerd[1634]: time="2025-10-13T05:28:17.081142996Z" level=info msg="connecting to shim c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065" 
address="unix:///run/containerd/s/3d0ce2a8cebc9e0d77cdde77799ecbea1d084a58c2ad0e4b4bf2b3973ff9384c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:17.110882 systemd[1]: Started cri-containerd-c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065.scope - libcontainer container c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065. Oct 13 05:28:17.125848 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:17.162808 kubelet[2790]: E1013 05:28:17.162267 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:17.162808 kubelet[2790]: E1013 05:28:17.162536 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:17.163972 systemd-networkd[1530]: calib491fc5f7ab: Link UP Oct 13 05:28:17.166458 systemd-networkd[1530]: calib491fc5f7ab: Gained carrier Oct 13 05:28:17.185370 kubelet[2790]: I1013 05:28:17.184826 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6p2l2" podStartSLOduration=32.184809472 podStartE2EDuration="32.184809472s" podCreationTimestamp="2025-10-13 05:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:28:17.183597333 +0000 UTC m=+37.342918708" watchObservedRunningTime="2025-10-13 05:28:17.184809472 +0000 UTC m=+37.344130837" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:16.978 [INFO][4440] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:16.991 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-csi--node--driver--cjhz6-eth0 csi-node-driver- calico-system d0ed4399-1a69-434d-b19f-5ac4963e0fd8 751 0 2025-10-13 05:27:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cjhz6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib491fc5f7ab [] [] }} ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:16.991 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" HandleID="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Workload="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" HandleID="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Workload="localhost-k8s-csi--node--driver--cjhz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d490), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"csi-node-driver-cjhz6", "timestamp":"2025-10-13 05:28:17.017145766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.017 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.044 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.125 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.131 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.134 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.136 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.138 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.138 [INFO][4471] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.139 [INFO][4471] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.144 [INFO][4471] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4471] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" host="localhost" Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:17.193933 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" HandleID="k8s-pod-network.b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Workload="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.160 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cjhz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d0ed4399-1a69-434d-b19f-5ac4963e0fd8", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cjhz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib491fc5f7ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.161 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.161 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib491fc5f7ab ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.165 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.168 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cjhz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d0ed4399-1a69-434d-b19f-5ac4963e0fd8", ResourceVersion:"751", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb", Pod:"csi-node-driver-cjhz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib491fc5f7ab", MAC:"7e:61:b9:d5:db:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.194573 containerd[1634]: 2025-10-13 05:28:17.186 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" Namespace="calico-system" Pod="csi-node-driver-cjhz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cjhz6-eth0" Oct 13 05:28:17.203004 containerd[1634]: time="2025-10-13T05:28:17.202849210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-r22k9,Uid:a700f481-9241-4166-b620-175cd7d4c02c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065\"" Oct 13 05:28:17.239669 containerd[1634]: time="2025-10-13T05:28:17.239556350Z" level=info 
msg="connecting to shim b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb" address="unix:///run/containerd/s/c91fb4531233ac17d3290da2baec4b52317c6fad707f47572a308c1e602389a8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:17.266385 systemd-networkd[1530]: calib71112d233e: Link UP Oct 13 05:28:17.267434 systemd-networkd[1530]: calib71112d233e: Gained carrier Oct 13 05:28:17.268890 systemd[1]: Started cri-containerd-b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb.scope - libcontainer container b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb. Oct 13 05:28:17.286336 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:16.978 [INFO][4431] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:16.990 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0 calico-kube-controllers-59cddc5ff- calico-system deeb05c6-03db-466f-aa9a-63f5c23b7762 870 0 2025-10-13 05:27:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59cddc5ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59cddc5ff-gp9jb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib71112d233e [] [] }} ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:16.990 
[INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.024 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" HandleID="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Workload="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.025 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" HandleID="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Workload="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003244e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59cddc5ff-gp9jb", "timestamp":"2025-10-13 05:28:17.024853366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.026 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.151 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.226 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.233 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.237 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.239 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.241 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.241 [INFO][4467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.242 [INFO][4467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.250 [INFO][4467] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.257 [INFO][4467] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.257 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" host="localhost" Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.257 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:17.293856 containerd[1634]: 2025-10-13 05:28:17.257 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" HandleID="k8s-pod-network.da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Workload="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 05:28:17.261 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0", GenerateName:"calico-kube-controllers-59cddc5ff-", Namespace:"calico-system", SelfLink:"", UID:"deeb05c6-03db-466f-aa9a-63f5c23b7762", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cddc5ff", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59cddc5ff-gp9jb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib71112d233e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 05:28:17.261 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 05:28:17.261 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib71112d233e ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 05:28:17.271 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 
05:28:17.272 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0", GenerateName:"calico-kube-controllers-59cddc5ff-", Namespace:"calico-system", SelfLink:"", UID:"deeb05c6-03db-466f-aa9a-63f5c23b7762", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cddc5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b", Pod:"calico-kube-controllers-59cddc5ff-gp9jb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib71112d233e", MAC:"2e:1f:4b:af:62:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:17.294402 containerd[1634]: 2025-10-13 
05:28:17.287 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" Namespace="calico-system" Pod="calico-kube-controllers-59cddc5ff-gp9jb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59cddc5ff--gp9jb-eth0" Oct 13 05:28:17.305300 containerd[1634]: time="2025-10-13T05:28:17.305252131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjhz6,Uid:d0ed4399-1a69-434d-b19f-5ac4963e0fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb\"" Oct 13 05:28:17.326643 containerd[1634]: time="2025-10-13T05:28:17.326567216Z" level=info msg="connecting to shim da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b" address="unix:///run/containerd/s/79f4384ce51b3fa98dad74ddba88c29bf39d0c115c542d73d563feb98703f9c3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:17.359931 systemd[1]: Started cri-containerd-da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b.scope - libcontainer container da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b. 
Oct 13 05:28:17.374427 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:17.381322 containerd[1634]: time="2025-10-13T05:28:17.381277325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:17.382112 containerd[1634]: time="2025-10-13T05:28:17.382068994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:28:17.383114 containerd[1634]: time="2025-10-13T05:28:17.383059241Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:17.385289 containerd[1634]: time="2025-10-13T05:28:17.385260595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:17.385820 containerd[1634]: time="2025-10-13T05:28:17.385789086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.481840547s" Oct 13 05:28:17.385850 containerd[1634]: time="2025-10-13T05:28:17.385819612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:28:17.388448 containerd[1634]: time="2025-10-13T05:28:17.387532089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:28:17.390444 containerd[1634]: 
time="2025-10-13T05:28:17.390420749Z" level=info msg="CreateContainer within sandbox \"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:28:17.397356 containerd[1634]: time="2025-10-13T05:28:17.397313476Z" level=info msg="Container a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:17.404530 containerd[1634]: time="2025-10-13T05:28:17.404502093Z" level=info msg="CreateContainer within sandbox \"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710\"" Oct 13 05:28:17.405179 containerd[1634]: time="2025-10-13T05:28:17.405146388Z" level=info msg="StartContainer for \"a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710\"" Oct 13 05:28:17.407495 containerd[1634]: time="2025-10-13T05:28:17.407467634Z" level=info msg="connecting to shim a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710" address="unix:///run/containerd/s/3f75e99c51cd3e713bc1f070842dcb07c9073e5b76d960b8cf42f09b139a4e41" protocol=ttrpc version=3 Oct 13 05:28:17.408781 containerd[1634]: time="2025-10-13T05:28:17.408739003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cddc5ff-gp9jb,Uid:deeb05c6-03db-466f-aa9a-63f5c23b7762,Namespace:calico-system,Attempt:0,} returns sandbox id \"da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b\"" Oct 13 05:28:17.426888 systemd[1]: Started cri-containerd-a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710.scope - libcontainer container a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710. 
Oct 13 05:28:17.476441 containerd[1634]: time="2025-10-13T05:28:17.476312157Z" level=info msg="StartContainer for \"a5072e2c18ef3fffea43a97d9ccacf2d893ddd7d73ac9b368a8c817036cb6710\" returns successfully" Oct 13 05:28:17.719950 systemd-networkd[1530]: cali7d41d51ab93: Gained IPv6LL Oct 13 05:28:17.783946 systemd-networkd[1530]: calidcc1d75f7c0: Gained IPv6LL Oct 13 05:28:17.948045 containerd[1634]: time="2025-10-13T05:28:17.947985471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-jxczs,Uid:f5cbf7e7-777e-49bc-81fe-a7d3cd83a422,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:18.043294 systemd-networkd[1530]: calib1b53333810: Link UP Oct 13 05:28:18.043710 systemd-networkd[1530]: calib1b53333810: Gained carrier Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:17.972 [INFO][4713] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:17.983 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0 calico-apiserver-5cb6ddf7bc- calico-apiserver f5cbf7e7-777e-49bc-81fe-a7d3cd83a422 862 0 2025-10-13 05:27:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cb6ddf7bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cb6ddf7bc-jxczs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib1b53333810 [] [] }} ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:17.983 [INFO][4713] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.009 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" HandleID="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.010 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" HandleID="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cb6ddf7bc-jxczs", "timestamp":"2025-10-13 05:28:18.009464154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.010 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.010 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.010 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.016 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.020 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.023 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.024 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.026 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.026 [INFO][4727] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.027 [INFO][4727] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.031 [INFO][4727] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.037 [INFO][4727] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.037 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" host="localhost" Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.037 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:18.057132 containerd[1634]: 2025-10-13 05:28:18.037 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" HandleID="k8s-pod-network.f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Workload="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.041 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0", GenerateName:"calico-apiserver-5cb6ddf7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5cbf7e7-777e-49bc-81fe-a7d3cd83a422", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb6ddf7bc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cb6ddf7bc-jxczs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1b53333810", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.041 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.041 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1b53333810 ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.043 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.044 [INFO][4713] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0", GenerateName:"calico-apiserver-5cb6ddf7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5cbf7e7-777e-49bc-81fe-a7d3cd83a422", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb6ddf7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e", Pod:"calico-apiserver-5cb6ddf7bc-jxczs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1b53333810", MAC:"6e:5b:c6:6d:f8:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:18.057676 containerd[1634]: 2025-10-13 05:28:18.054 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" Namespace="calico-apiserver" Pod="calico-apiserver-5cb6ddf7bc-jxczs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb6ddf7bc--jxczs-eth0" Oct 13 05:28:18.076366 containerd[1634]: time="2025-10-13T05:28:18.076302787Z" level=info msg="connecting to shim f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e" address="unix:///run/containerd/s/aae38ffff2c1fa243a54ad403c5bee811da83c0fc88c16800581c219943ffea8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:18.104784 systemd[1]: Started cri-containerd-f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e.scope - libcontainer container f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e. Oct 13 05:28:18.116825 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:18.146605 containerd[1634]: time="2025-10-13T05:28:18.146542971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb6ddf7bc-jxczs,Uid:f5cbf7e7-777e-49bc-81fe-a7d3cd83a422,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e\"" Oct 13 05:28:18.171762 kubelet[2790]: E1013 05:28:18.171727 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:18.172171 kubelet[2790]: E1013 05:28:18.171906 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:18.231796 systemd-networkd[1530]: cali9d3efe55613: Gained IPv6LL Oct 13 05:28:18.679867 systemd-networkd[1530]: calib71112d233e: Gained IPv6LL Oct 13 05:28:19.174330 kubelet[2790]: E1013 05:28:19.174285 2790 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:19.191892 systemd-networkd[1530]: calib491fc5f7ab: Gained IPv6LL Oct 13 05:28:19.313928 containerd[1634]: time="2025-10-13T05:28:19.313864964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.314614 containerd[1634]: time="2025-10-13T05:28:19.314554935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:28:19.315882 containerd[1634]: time="2025-10-13T05:28:19.315840191Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.318122 containerd[1634]: time="2025-10-13T05:28:19.318092994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.318715 containerd[1634]: time="2025-10-13T05:28:19.318678431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.93109189s" Oct 13 05:28:19.318715 containerd[1634]: time="2025-10-13T05:28:19.318714367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:28:19.319526 containerd[1634]: time="2025-10-13T05:28:19.319503772Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:28:19.322207 containerd[1634]: time="2025-10-13T05:28:19.322169261Z" level=info msg="CreateContainer within sandbox \"c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:28:19.330415 containerd[1634]: time="2025-10-13T05:28:19.330367320Z" level=info msg="Container a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:19.339946 containerd[1634]: time="2025-10-13T05:28:19.339892042Z" level=info msg="CreateContainer within sandbox \"c6b5189db885aecaf416656285e7b1a473f0456cab5efa3c9ba9b3c58cec2065\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0\"" Oct 13 05:28:19.340403 containerd[1634]: time="2025-10-13T05:28:19.340371672Z" level=info msg="StartContainer for \"a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0\"" Oct 13 05:28:19.341477 containerd[1634]: time="2025-10-13T05:28:19.341442812Z" level=info msg="connecting to shim a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0" address="unix:///run/containerd/s/3d0ce2a8cebc9e0d77cdde77799ecbea1d084a58c2ad0e4b4bf2b3973ff9384c" protocol=ttrpc version=3 Oct 13 05:28:19.364788 systemd[1]: Started cri-containerd-a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0.scope - libcontainer container a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0. 
Oct 13 05:28:19.418538 containerd[1634]: time="2025-10-13T05:28:19.418491980Z" level=info msg="StartContainer for \"a5b9a00e925999b38f8cd77cafb0641cc578e51cd08d4421683b7809925060f0\" returns successfully" Oct 13 05:28:19.575922 systemd-networkd[1530]: calib1b53333810: Gained IPv6LL Oct 13 05:28:19.950092 containerd[1634]: time="2025-10-13T05:28:19.949915771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qhqcv,Uid:06ef6e4a-8317-43b7-899e-6a0b731c7417,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:20.135914 systemd-networkd[1530]: cali8ad793c752d: Link UP Oct 13 05:28:20.137434 systemd-networkd[1530]: cali8ad793c752d: Gained carrier Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:19.989 [INFO][4884] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.031 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--qhqcv-eth0 goldmane-54d579b49d- calico-system 06ef6e4a-8317-43b7-899e-6a0b731c7417 868 0 2025-10-13 05:27:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-qhqcv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8ad793c752d [] [] }} ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.031 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.084 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" HandleID="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Workload="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.084 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" HandleID="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Workload="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c66c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-qhqcv", "timestamp":"2025-10-13 05:28:20.08405962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.084 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.084 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.084 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.095 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.101 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.107 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.110 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.112 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.113 [INFO][4901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.114 [INFO][4901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5 Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.120 [INFO][4901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.128 [INFO][4901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.128 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" host="localhost" Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.128 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:20.152728 containerd[1634]: 2025-10-13 05:28:20.128 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" HandleID="k8s-pod-network.56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Workload="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.132 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qhqcv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"06ef6e4a-8317-43b7-899e-6a0b731c7417", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-qhqcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ad793c752d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.132 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.132 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ad793c752d ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.135 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.135 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qhqcv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"06ef6e4a-8317-43b7-899e-6a0b731c7417", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5", Pod:"goldmane-54d579b49d-qhqcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ad793c752d", MAC:"3e:57:37:8e:66:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:20.153359 containerd[1634]: 2025-10-13 05:28:20.148 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" Namespace="calico-system" Pod="goldmane-54d579b49d-qhqcv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qhqcv-eth0" Oct 13 05:28:20.195257 kubelet[2790]: I1013 05:28:20.195170 2790 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-r22k9" podStartSLOduration=25.081078939 podStartE2EDuration="27.195154006s" podCreationTimestamp="2025-10-13 05:27:53 +0000 UTC" firstStartedPulling="2025-10-13 05:28:17.20532317 +0000 UTC m=+37.364644545" lastFinishedPulling="2025-10-13 05:28:19.319398237 +0000 UTC m=+39.478719612" observedRunningTime="2025-10-13 05:28:20.194507746 +0000 UTC m=+40.353829121" watchObservedRunningTime="2025-10-13 05:28:20.195154006 +0000 UTC m=+40.354475381" Oct 13 05:28:20.203007 containerd[1634]: time="2025-10-13T05:28:20.202863182Z" level=info msg="connecting to shim 56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5" address="unix:///run/containerd/s/a8630fa01b9d035c5f251f70c516b3219058866c15ed51f8fdc24838fe1e5a32" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:20.267830 systemd[1]: Started cri-containerd-56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5.scope - libcontainer container 56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5. Oct 13 05:28:20.285062 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:20.319423 containerd[1634]: time="2025-10-13T05:28:20.319337308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qhqcv,Uid:06ef6e4a-8317-43b7-899e-6a0b731c7417,Namespace:calico-system,Attempt:0,} returns sandbox id \"56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5\"" Oct 13 05:28:20.720737 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:60428.service - OpenSSH per-connection server daemon (10.0.0.1:60428). 
Oct 13 05:28:20.768817 containerd[1634]: time="2025-10-13T05:28:20.768743198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:20.770001 containerd[1634]: time="2025-10-13T05:28:20.769965397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:28:20.772419 containerd[1634]: time="2025-10-13T05:28:20.772357339Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:20.774484 containerd[1634]: time="2025-10-13T05:28:20.774425770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:20.774910 containerd[1634]: time="2025-10-13T05:28:20.774865948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.455335415s" Oct 13 05:28:20.774910 containerd[1634]: time="2025-10-13T05:28:20.774905521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:28:20.775845 containerd[1634]: time="2025-10-13T05:28:20.775807266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:28:20.779808 containerd[1634]: time="2025-10-13T05:28:20.779770665Z" level=info msg="CreateContainer within sandbox \"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:28:20.793851 containerd[1634]: time="2025-10-13T05:28:20.793767358Z" level=info msg="Container 796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:20.805348 containerd[1634]: time="2025-10-13T05:28:20.805291950Z" level=info msg="CreateContainer within sandbox \"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db\"" Oct 13 05:28:20.805809 containerd[1634]: time="2025-10-13T05:28:20.805784735Z" level=info msg="StartContainer for \"796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db\"" Oct 13 05:28:20.807556 containerd[1634]: time="2025-10-13T05:28:20.807472178Z" level=info msg="connecting to shim 796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db" address="unix:///run/containerd/s/c91fb4531233ac17d3290da2baec4b52317c6fad707f47572a308c1e602389a8" protocol=ttrpc version=3 Oct 13 05:28:20.844848 systemd[1]: Started cri-containerd-796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db.scope - libcontainer container 796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db. Oct 13 05:28:20.866680 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 60428 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:20.869424 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:20.876136 systemd-logind[1604]: New session 9 of user core. Oct 13 05:28:20.883800 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 13 05:28:20.895435 containerd[1634]: time="2025-10-13T05:28:20.895382881Z" level=info msg="StartContainer for \"796995333a85f1e74712f0e65f169dc4347059bfefef6ff7dd626b08b6adf5db\" returns successfully" Oct 13 05:28:21.033620 sshd[5004]: Connection closed by 10.0.0.1 port 60428 Oct 13 05:28:21.033851 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:21.040198 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:60428.service: Deactivated successfully. Oct 13 05:28:21.042717 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:28:21.043890 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:28:21.045788 systemd-logind[1604]: Removed session 9. Oct 13 05:28:21.189694 kubelet[2790]: I1013 05:28:21.189632 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:21.559868 systemd-networkd[1530]: cali8ad793c752d: Gained IPv6LL Oct 13 05:28:23.026867 containerd[1634]: time="2025-10-13T05:28:23.026788954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:23.027529 containerd[1634]: time="2025-10-13T05:28:23.027479397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:28:23.028620 containerd[1634]: time="2025-10-13T05:28:23.028585814Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:23.030693 containerd[1634]: time="2025-10-13T05:28:23.030664958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:23.031254 containerd[1634]: time="2025-10-13T05:28:23.031201005Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.255366057s" Oct 13 05:28:23.031254 containerd[1634]: time="2025-10-13T05:28:23.031246590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:28:23.032539 containerd[1634]: time="2025-10-13T05:28:23.032308994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:28:23.045334 containerd[1634]: time="2025-10-13T05:28:23.045277709Z" level=info msg="CreateContainer within sandbox \"da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:28:23.054052 containerd[1634]: time="2025-10-13T05:28:23.053979672Z" level=info msg="Container 1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:23.064077 containerd[1634]: time="2025-10-13T05:28:23.064030925Z" level=info msg="CreateContainer within sandbox \"da0464532e00504020baaa51714a41efb4478f43933b94fb74569d7c4ee9be5b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe\"" Oct 13 05:28:23.064683 containerd[1634]: time="2025-10-13T05:28:23.064614549Z" level=info msg="StartContainer for \"1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe\"" Oct 13 05:28:23.065871 containerd[1634]: time="2025-10-13T05:28:23.065807196Z" level=info msg="connecting to shim 
1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe" address="unix:///run/containerd/s/79f4384ce51b3fa98dad74ddba88c29bf39d0c115c542d73d563feb98703f9c3" protocol=ttrpc version=3 Oct 13 05:28:23.101965 systemd[1]: Started cri-containerd-1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe.scope - libcontainer container 1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe. Oct 13 05:28:23.159820 containerd[1634]: time="2025-10-13T05:28:23.159753490Z" level=info msg="StartContainer for \"1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe\" returns successfully" Oct 13 05:28:23.219147 kubelet[2790]: I1013 05:28:23.218960 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59cddc5ff-gp9jb" podStartSLOduration=22.59714477 podStartE2EDuration="28.218940599s" podCreationTimestamp="2025-10-13 05:27:55 +0000 UTC" firstStartedPulling="2025-10-13 05:28:17.410373806 +0000 UTC m=+37.569695181" lastFinishedPulling="2025-10-13 05:28:23.032169635 +0000 UTC m=+43.191491010" observedRunningTime="2025-10-13 05:28:23.218589727 +0000 UTC m=+43.377911112" watchObservedRunningTime="2025-10-13 05:28:23.218940599 +0000 UTC m=+43.378261974" Oct 13 05:28:23.282209 containerd[1634]: time="2025-10-13T05:28:23.282073873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe\" id:\"81b2c0c310cbaeca8ce845e0a1f2f62dc5e468b559bfc6711e94662dfccf3cea\" pid:5137 exited_at:{seconds:1760333303 nanos:281093300}" Oct 13 05:28:23.605938 kubelet[2790]: I1013 05:28:23.605861 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:23.606398 kubelet[2790]: E1013 05:28:23.606361 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:24.206682 
kubelet[2790]: E1013 05:28:24.206046 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:24.404592 systemd-networkd[1530]: vxlan.calico: Link UP Oct 13 05:28:24.404616 systemd-networkd[1530]: vxlan.calico: Gained carrier Oct 13 05:28:26.055221 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:44210.service - OpenSSH per-connection server daemon (10.0.0.1:44210). Oct 13 05:28:26.141484 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 44210 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:26.144370 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:26.149865 systemd-logind[1604]: New session 10 of user core. Oct 13 05:28:26.159856 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:28:26.204549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689333739.mount: Deactivated successfully. Oct 13 05:28:26.401482 sshd[5319]: Connection closed by 10.0.0.1 port 44210 Oct 13 05:28:26.401828 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:26.412554 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:44210.service: Deactivated successfully. Oct 13 05:28:26.414991 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:28:26.415899 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:28:26.419240 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:44224.service - OpenSSH per-connection server daemon (10.0.0.1:44224). Oct 13 05:28:26.420150 systemd-logind[1604]: Removed session 10. 
Oct 13 05:28:26.424911 systemd-networkd[1530]: vxlan.calico: Gained IPv6LL Oct 13 05:28:26.476945 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 44224 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:26.478275 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:26.483070 systemd-logind[1604]: New session 11 of user core. Oct 13 05:28:26.490813 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:28:26.747394 sshd[5338]: Connection closed by 10.0.0.1 port 44224 Oct 13 05:28:26.747692 sshd-session[5335]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:26.756811 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:44224.service: Deactivated successfully. Oct 13 05:28:26.758866 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:28:26.759773 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:28:26.762871 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:44228.service - OpenSSH per-connection server daemon (10.0.0.1:44228). Oct 13 05:28:26.763991 systemd-logind[1604]: Removed session 11. Oct 13 05:28:26.835732 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 44228 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:26.837163 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:26.841997 systemd-logind[1604]: New session 12 of user core. Oct 13 05:28:26.849814 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:28:27.030421 sshd[5353]: Connection closed by 10.0.0.1 port 44228 Oct 13 05:28:27.030993 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:27.035379 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:44228.service: Deactivated successfully. Oct 13 05:28:27.037590 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 13 05:28:27.038561 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:28:27.040044 systemd-logind[1604]: Removed session 12. Oct 13 05:28:27.109758 containerd[1634]: time="2025-10-13T05:28:27.109685109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:27.110484 containerd[1634]: time="2025-10-13T05:28:27.110451044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:28:27.111704 containerd[1634]: time="2025-10-13T05:28:27.111674841Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:27.113756 containerd[1634]: time="2025-10-13T05:28:27.113724686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:27.114349 containerd[1634]: time="2025-10-13T05:28:27.114319111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.081953712s" Oct 13 05:28:27.114398 containerd[1634]: time="2025-10-13T05:28:27.114352113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:28:27.116194 containerd[1634]: time="2025-10-13T05:28:27.116146942Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:28:27.122935 containerd[1634]: time="2025-10-13T05:28:27.122896371Z" level=info msg="CreateContainer within sandbox \"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:28:27.129733 containerd[1634]: time="2025-10-13T05:28:27.129617917Z" level=info msg="Container 86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:27.137763 containerd[1634]: time="2025-10-13T05:28:27.137690917Z" level=info msg="CreateContainer within sandbox \"b650e3c23a35718e16880475571f469528177d7abbc074e516e4917ad708a4e2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927\"" Oct 13 05:28:27.138170 containerd[1634]: time="2025-10-13T05:28:27.138143349Z" level=info msg="StartContainer for \"86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927\"" Oct 13 05:28:27.139389 containerd[1634]: time="2025-10-13T05:28:27.139359762Z" level=info msg="connecting to shim 86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927" address="unix:///run/containerd/s/3f75e99c51cd3e713bc1f070842dcb07c9073e5b76d960b8cf42f09b139a4e41" protocol=ttrpc version=3 Oct 13 05:28:27.160838 systemd[1]: Started cri-containerd-86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927.scope - libcontainer container 86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927. 
Oct 13 05:28:27.217682 containerd[1634]: time="2025-10-13T05:28:27.217619613Z" level=info msg="StartContainer for \"86e565286338401d7dbf4d1f95420cb8e6d65c52f2a35a9376ae5184afa58927\" returns successfully" Oct 13 05:28:27.475203 containerd[1634]: time="2025-10-13T05:28:27.475115313Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:27.476217 containerd[1634]: time="2025-10-13T05:28:27.476170136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:28:27.478303 containerd[1634]: time="2025-10-13T05:28:27.478260534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 362.075252ms" Oct 13 05:28:27.478303 containerd[1634]: time="2025-10-13T05:28:27.478302353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:28:27.479191 containerd[1634]: time="2025-10-13T05:28:27.479165588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:28:27.483468 containerd[1634]: time="2025-10-13T05:28:27.483425254Z" level=info msg="CreateContainer within sandbox \"f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:28:27.496979 containerd[1634]: time="2025-10-13T05:28:27.495960768Z" level=info msg="Container 5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:27.504816 containerd[1634]: 
time="2025-10-13T05:28:27.504772573Z" level=info msg="CreateContainer within sandbox \"f9a211ed7279cd42e82f6a0c9f798d881ea7e5c769649c8596b570fa3e142f3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0\"" Oct 13 05:28:27.505797 containerd[1634]: time="2025-10-13T05:28:27.505712562Z" level=info msg="StartContainer for \"5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0\"" Oct 13 05:28:27.508589 containerd[1634]: time="2025-10-13T05:28:27.508027749Z" level=info msg="connecting to shim 5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0" address="unix:///run/containerd/s/aae38ffff2c1fa243a54ad403c5bee811da83c0fc88c16800581c219943ffea8" protocol=ttrpc version=3 Oct 13 05:28:27.543952 systemd[1]: Started cri-containerd-5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0.scope - libcontainer container 5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0. 
Oct 13 05:28:27.594493 containerd[1634]: time="2025-10-13T05:28:27.594440901Z" level=info msg="StartContainer for \"5368efc174ebf2f8c62c2fa48d09b319953e7cd4401c070b9465a7a7a64799d0\" returns successfully" Oct 13 05:28:28.232320 kubelet[2790]: I1013 05:28:28.232247 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6fb94755-wrqf8" podStartSLOduration=2.019668106 podStartE2EDuration="13.232230453s" podCreationTimestamp="2025-10-13 05:28:15 +0000 UTC" firstStartedPulling="2025-10-13 05:28:15.903390114 +0000 UTC m=+36.062711489" lastFinishedPulling="2025-10-13 05:28:27.115952461 +0000 UTC m=+47.275273836" observedRunningTime="2025-10-13 05:28:28.231559153 +0000 UTC m=+48.390880528" watchObservedRunningTime="2025-10-13 05:28:28.232230453 +0000 UTC m=+48.391551818" Oct 13 05:28:28.245267 kubelet[2790]: I1013 05:28:28.245204 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cb6ddf7bc-jxczs" podStartSLOduration=25.914042732 podStartE2EDuration="35.245188809s" podCreationTimestamp="2025-10-13 05:27:53 +0000 UTC" firstStartedPulling="2025-10-13 05:28:18.14791089 +0000 UTC m=+38.307232265" lastFinishedPulling="2025-10-13 05:28:27.479056966 +0000 UTC m=+47.638378342" observedRunningTime="2025-10-13 05:28:28.244362011 +0000 UTC m=+48.403683376" watchObservedRunningTime="2025-10-13 05:28:28.245188809 +0000 UTC m=+48.404510184" Oct 13 05:28:29.222800 kubelet[2790]: I1013 05:28:29.222729 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:29.826288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135321211.mount: Deactivated successfully. 
Oct 13 05:28:31.716815 containerd[1634]: time="2025-10-13T05:28:31.716747749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:31.717849 containerd[1634]: time="2025-10-13T05:28:31.717787766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:28:31.719029 containerd[1634]: time="2025-10-13T05:28:31.718954218Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:31.722041 containerd[1634]: time="2025-10-13T05:28:31.721991013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:31.723561 containerd[1634]: time="2025-10-13T05:28:31.723503500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.244308296s" Oct 13 05:28:31.723561 containerd[1634]: time="2025-10-13T05:28:31.723542443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:28:31.725378 containerd[1634]: time="2025-10-13T05:28:31.724975782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:28:31.729520 containerd[1634]: time="2025-10-13T05:28:31.729467527Z" level=info msg="CreateContainer within sandbox 
\"56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:28:31.740214 containerd[1634]: time="2025-10-13T05:28:31.739264452Z" level=info msg="Container d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:31.747502 containerd[1634]: time="2025-10-13T05:28:31.747434347Z" level=info msg="CreateContainer within sandbox \"56094b86092df05a61a5e8f53bfe72036be854ba15371d0b650992ec28fd66a5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\"" Oct 13 05:28:31.748191 containerd[1634]: time="2025-10-13T05:28:31.748157764Z" level=info msg="StartContainer for \"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\"" Oct 13 05:28:31.749658 containerd[1634]: time="2025-10-13T05:28:31.749593769Z" level=info msg="connecting to shim d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf" address="unix:///run/containerd/s/a8630fa01b9d035c5f251f70c516b3219058866c15ed51f8fdc24838fe1e5a32" protocol=ttrpc version=3 Oct 13 05:28:31.775876 systemd[1]: Started cri-containerd-d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf.scope - libcontainer container d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf. Oct 13 05:28:32.013858 containerd[1634]: time="2025-10-13T05:28:32.013723970Z" level=info msg="StartContainer for \"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\" returns successfully" Oct 13 05:28:32.050284 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:44194.service - OpenSSH per-connection server daemon (10.0.0.1:44194). 
Oct 13 05:28:32.131507 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 44194 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:28:32.134221 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:32.139675 systemd-logind[1604]: New session 13 of user core. Oct 13 05:28:32.150782 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:28:32.259505 kubelet[2790]: I1013 05:28:32.259235 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qhqcv" podStartSLOduration=25.855474021 podStartE2EDuration="37.259215833s" podCreationTimestamp="2025-10-13 05:27:55 +0000 UTC" firstStartedPulling="2025-10-13 05:28:20.32109961 +0000 UTC m=+40.480420985" lastFinishedPulling="2025-10-13 05:28:31.724841422 +0000 UTC m=+51.884162797" observedRunningTime="2025-10-13 05:28:32.258059809 +0000 UTC m=+52.417381195" watchObservedRunningTime="2025-10-13 05:28:32.259215833 +0000 UTC m=+52.418537208" Oct 13 05:28:32.299006 sshd[5503]: Connection closed by 10.0.0.1 port 44194 Oct 13 05:28:32.300881 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:32.305501 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:44194.service: Deactivated successfully. Oct 13 05:28:32.307664 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:28:32.308578 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:28:32.309792 systemd-logind[1604]: Removed session 13. 
Oct 13 05:28:33.243736 kubelet[2790]: I1013 05:28:33.243685 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 05:28:33.482475 containerd[1634]: time="2025-10-13T05:28:33.482421911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\" id:\"e3fe9e88ccdab9e55964070a458c1cbd9f3ada6bc0c6ad3755c200bd3c24f56d\" pid:5530 exit_status:1 exited_at:{seconds:1760333313 nanos:481964980}"
Oct 13 05:28:33.571948 containerd[1634]: time="2025-10-13T05:28:33.571896792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\" id:\"137d44e3983e7ea94be207c41b9c06da3958dc939f262f50050c96b016fed870\" pid:5553 exit_status:1 exited_at:{seconds:1760333313 nanos:571492209}"
Oct 13 05:28:34.324817 containerd[1634]: time="2025-10-13T05:28:34.324757953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\" id:\"62fe6c543e25511c34429bd8dffe6e1a9182287ac5f91e762277358bde4f401c\" pid:5575 exit_status:1 exited_at:{seconds:1760333314 nanos:324361876}"
Oct 13 05:28:35.190739 containerd[1634]: time="2025-10-13T05:28:35.190681632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:28:35.204871 containerd[1634]: time="2025-10-13T05:28:35.191520175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Oct 13 05:28:35.204871 containerd[1634]: time="2025-10-13T05:28:35.192698351Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:28:35.205109 containerd[1634]: time="2025-10-13T05:28:35.195705896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.470687656s"
Oct 13 05:28:35.205109 containerd[1634]: time="2025-10-13T05:28:35.204944857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Oct 13 05:28:35.205494 containerd[1634]: time="2025-10-13T05:28:35.205443205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:28:35.210256 containerd[1634]: time="2025-10-13T05:28:35.210216672Z" level=info msg="CreateContainer within sandbox \"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 13 05:28:35.439731 containerd[1634]: time="2025-10-13T05:28:35.435836942Z" level=info msg="Container 698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:28:35.449735 containerd[1634]: time="2025-10-13T05:28:35.449378921Z" level=info msg="CreateContainer within sandbox \"b3da49cbb2c7122d744587dac2648c869a5997fe3b356eef70653e16a30272bb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846\""
Oct 13 05:28:35.450211 containerd[1634]: time="2025-10-13T05:28:35.450073556Z" level=info msg="StartContainer for \"698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846\""
Oct 13 05:28:35.452045 containerd[1634]: time="2025-10-13T05:28:35.452019834Z" level=info msg="connecting to shim 698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846" address="unix:///run/containerd/s/c91fb4531233ac17d3290da2baec4b52317c6fad707f47572a308c1e602389a8" protocol=ttrpc version=3
Oct 13 05:28:35.483815 systemd[1]: Started cri-containerd-698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846.scope - libcontainer container 698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846.
Oct 13 05:28:35.530033 containerd[1634]: time="2025-10-13T05:28:35.529982975Z" level=info msg="StartContainer for \"698e2b4f6b0cfce4e08edcd690481b880c71b41702a1f87c5bb1bbd20272c846\" returns successfully"
Oct 13 05:28:36.005512 kubelet[2790]: I1013 05:28:36.005466 2790 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 13 05:28:36.006903 kubelet[2790]: I1013 05:28:36.006877 2790 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 13 05:28:36.263201 kubelet[2790]: I1013 05:28:36.262780 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cjhz6" podStartSLOduration=23.364565377 podStartE2EDuration="41.26276337s" podCreationTimestamp="2025-10-13 05:27:55 +0000 UTC" firstStartedPulling="2025-10-13 05:28:17.308065571 +0000 UTC m=+37.467386946" lastFinishedPulling="2025-10-13 05:28:35.206263564 +0000 UTC m=+55.365584939" observedRunningTime="2025-10-13 05:28:36.262160365 +0000 UTC m=+56.421481750" watchObservedRunningTime="2025-10-13 05:28:36.26276337 +0000 UTC m=+56.422084745"
Oct 13 05:28:37.312804 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:44198.service - OpenSSH per-connection server daemon (10.0.0.1:44198).
Oct 13 05:28:37.380312 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:37.381990 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:37.386216 systemd-logind[1604]: New session 14 of user core.
Oct 13 05:28:37.393766 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 05:28:37.533394 sshd[5641]: Connection closed by 10.0.0.1 port 44198
Oct 13 05:28:37.533717 sshd-session[5638]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:37.543471 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:44198.service: Deactivated successfully.
Oct 13 05:28:37.545697 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 05:28:37.546464 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit.
Oct 13 05:28:37.547674 systemd-logind[1604]: Removed session 14.
Oct 13 05:28:40.108066 kubelet[2790]: I1013 05:28:40.108019 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 05:28:42.545624 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:37466.service - OpenSSH per-connection server daemon (10.0.0.1:37466).
Oct 13 05:28:42.606755 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 37466 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:42.608451 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:42.613059 systemd-logind[1604]: New session 15 of user core.
Oct 13 05:28:42.626777 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 05:28:42.742574 sshd[5662]: Connection closed by 10.0.0.1 port 37466
Oct 13 05:28:42.742877 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:42.748636 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:37466.service: Deactivated successfully.
Oct 13 05:28:42.750944 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 05:28:42.751833 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit.
Oct 13 05:28:42.753161 systemd-logind[1604]: Removed session 15.
Oct 13 05:28:46.313699 containerd[1634]: time="2025-10-13T05:28:46.313608757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1621ab9f6d764d9da32db25a5827324076a71694524fb34606133f8757dd1239\" id:\"6ceccfde76fc887b79de6507e0c13a27a43562d5adcef93778bde60d29c706b4\" pid:5696 exited_at:{seconds:1760333326 nanos:313203259}"
Oct 13 05:28:47.754904 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:37468.service - OpenSSH per-connection server daemon (10.0.0.1:37468).
Oct 13 05:28:47.852433 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:47.854621 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:47.859794 systemd-logind[1604]: New session 16 of user core.
Oct 13 05:28:47.866833 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:28:48.001622 sshd[5715]: Connection closed by 10.0.0.1 port 37468
Oct 13 05:28:48.001964 sshd-session[5712]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:48.011687 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:37468.service: Deactivated successfully.
Oct 13 05:28:48.013793 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:28:48.014642 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:28:48.017627 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:37484.service - OpenSSH per-connection server daemon (10.0.0.1:37484).
Oct 13 05:28:48.018304 systemd-logind[1604]: Removed session 16.
Oct 13 05:28:48.077533 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 37484 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:48.079733 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:48.084590 systemd-logind[1604]: New session 17 of user core.
Oct 13 05:28:48.092922 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:28:48.544627 sshd[5732]: Connection closed by 10.0.0.1 port 37484
Oct 13 05:28:48.545727 sshd-session[5728]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:48.554385 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:37484.service: Deactivated successfully.
Oct 13 05:28:48.557531 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:28:48.558376 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:28:48.561957 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:37494.service - OpenSSH per-connection server daemon (10.0.0.1:37494).
Oct 13 05:28:48.562532 systemd-logind[1604]: Removed session 17.
Oct 13 05:28:48.626240 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 37494 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:48.627520 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:48.631858 systemd-logind[1604]: New session 18 of user core.
Oct 13 05:28:48.641784 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:28:49.275092 sshd[5746]: Connection closed by 10.0.0.1 port 37494
Oct 13 05:28:49.275908 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:49.289684 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:37494.service: Deactivated successfully.
Oct 13 05:28:49.291937 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:28:49.293291 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:28:49.298469 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:37500.service - OpenSSH per-connection server daemon (10.0.0.1:37500).
Oct 13 05:28:49.300401 systemd-logind[1604]: Removed session 18.
Oct 13 05:28:49.358068 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 37500 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:49.359409 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:49.364160 systemd-logind[1604]: New session 19 of user core.
Oct 13 05:28:49.378792 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:28:49.707986 sshd[5771]: Connection closed by 10.0.0.1 port 37500
Oct 13 05:28:49.708817 sshd-session[5768]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:49.723853 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:37500.service: Deactivated successfully.
Oct 13 05:28:49.727036 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:28:49.728127 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:28:49.732020 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:37506.service - OpenSSH per-connection server daemon (10.0.0.1:37506).
Oct 13 05:28:49.732798 systemd-logind[1604]: Removed session 19.
Oct 13 05:28:49.806437 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 37506 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:49.808389 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:49.813270 systemd-logind[1604]: New session 20 of user core.
Oct 13 05:28:49.826794 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:28:49.948352 sshd[5786]: Connection closed by 10.0.0.1 port 37506
Oct 13 05:28:49.950907 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:49.956165 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:37506.service: Deactivated successfully.
Oct 13 05:28:49.958842 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:28:49.959732 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:28:49.961395 systemd-logind[1604]: Removed session 20.
Oct 13 05:28:52.947542 kubelet[2790]: E1013 05:28:52.947493 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:28:53.277400 containerd[1634]: time="2025-10-13T05:28:53.277195365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c23a672cbe68304f2edff0f69532740818b51cff29bab24c7a906afc795b6fe\" id:\"75d8043ec0ee6a00086029a14b21c95037734cbf7e712bba3786f256b2fb21c3\" pid:5810 exited_at:{seconds:1760333333 nanos:276709557}"
Oct 13 05:28:54.960845 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:37006.service - OpenSSH per-connection server daemon (10.0.0.1:37006).
Oct 13 05:28:55.017146 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 37006 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:28:55.018617 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:55.022923 systemd-logind[1604]: New session 21 of user core.
Oct 13 05:28:55.034809 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:28:55.147567 sshd[5826]: Connection closed by 10.0.0.1 port 37006
Oct 13 05:28:55.147954 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:55.153186 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:37006.service: Deactivated successfully.
Oct 13 05:28:55.155257 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:28:55.156076 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:28:55.157463 systemd-logind[1604]: Removed session 21.
Oct 13 05:28:55.948295 kubelet[2790]: E1013 05:28:55.948230 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:28:57.313804 kubelet[2790]: I1013 05:28:57.313718 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 05:29:00.172013 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:37008.service - OpenSSH per-connection server daemon (10.0.0.1:37008).
Oct 13 05:29:00.262413 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 37008 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:29:00.264554 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:29:00.269405 systemd-logind[1604]: New session 22 of user core.
Oct 13 05:29:00.282805 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:29:00.482601 sshd[5844]: Connection closed by 10.0.0.1 port 37008
Oct 13 05:29:00.482868 sshd-session[5841]: pam_unix(sshd:session): session closed for user core
Oct 13 05:29:00.488530 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:37008.service: Deactivated successfully.
Oct 13 05:29:00.490788 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:29:00.491556 systemd-logind[1604]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:29:00.492805 systemd-logind[1604]: Removed session 22.
Oct 13 05:29:04.333566 containerd[1634]: time="2025-10-13T05:29:04.333493549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9f6fd26054604d74219a0a9017188dbab0b6a17c68bfa0dc29617703d1daacf\" id:\"103e6d8b31cd6ab72f364b3b5fd816532003ab7c4b99a841d9c4ac0a77dfa4aa\" pid:5869 exited_at:{seconds:1760333344 nanos:323237935}"
Oct 13 05:29:05.501305 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396).
Oct 13 05:29:05.547819 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw
Oct 13 05:29:05.549168 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:29:05.553739 systemd-logind[1604]: New session 23 of user core.
Oct 13 05:29:05.564783 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:29:05.678608 sshd[5894]: Connection closed by 10.0.0.1 port 54396
Oct 13 05:29:05.678924 sshd-session[5891]: pam_unix(sshd:session): session closed for user core
Oct 13 05:29:05.682750 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:54396.service: Deactivated successfully.
Oct 13 05:29:05.684811 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:29:05.686537 systemd-logind[1604]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:29:05.687829 systemd-logind[1604]: Removed session 23.