Sep 4 04:18:30.864423 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 02:15:54 -00 2025
Sep 4 04:18:30.864459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:18:30.864471 kernel: BIOS-provided physical RAM map:
Sep 4 04:18:30.864481 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 04:18:30.864490 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 04:18:30.864499 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 04:18:30.864509 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 4 04:18:30.864522 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 4 04:18:30.864536 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 04:18:30.864545 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 04:18:30.864555 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 04:18:30.864564 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 04:18:30.864580 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 04:18:30.864590 kernel: NX (Execute Disable) protection: active
Sep 4 04:18:30.864605 kernel: APIC: Static calls initialized
Sep 4 04:18:30.864614 kernel: SMBIOS 2.8 present.
Sep 4 04:18:30.864628 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 04:18:30.864645 kernel: DMI: Memory slots populated: 1/1
Sep 4 04:18:30.864655 kernel: Hypervisor detected: KVM
Sep 4 04:18:30.864664 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 04:18:30.864674 kernel: kvm-clock: using sched offset of 5066036798 cycles
Sep 4 04:18:30.864685 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 04:18:30.864695 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 04:18:30.864711 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 04:18:30.864728 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 04:18:30.864746 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 4 04:18:30.864777 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 04:18:30.864788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 04:18:30.864800 kernel: Using GB pages for direct mapping
Sep 4 04:18:30.864810 kernel: ACPI: Early table checksum verification disabled
Sep 4 04:18:30.864820 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 4 04:18:30.864830 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864845 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864855 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864865 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 04:18:30.864875 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864885 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864895 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864905 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:18:30.864915 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 4 04:18:30.864933 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 4 04:18:30.864943 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 04:18:30.864954 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 4 04:18:30.864964 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 4 04:18:30.864974 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 4 04:18:30.864985 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 4 04:18:30.864998 kernel: No NUMA configuration found
Sep 4 04:18:30.865008 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 4 04:18:30.865019 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 4 04:18:30.865030 kernel: Zone ranges:
Sep 4 04:18:30.865040 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 04:18:30.865051 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 4 04:18:30.865061 kernel: Normal empty
Sep 4 04:18:30.865071 kernel: Device empty
Sep 4 04:18:30.865082 kernel: Movable zone start for each node
Sep 4 04:18:30.865107 kernel: Early memory node ranges
Sep 4 04:18:30.865117 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 04:18:30.865161 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 4 04:18:30.865172 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 4 04:18:30.865194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 04:18:30.865207 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 04:18:30.865221 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 04:18:30.865235 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 04:18:30.865249 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 04:18:30.865260 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 04:18:30.865276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 04:18:30.865287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 04:18:30.865302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 04:18:30.865312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 04:18:30.865323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 04:18:30.865334 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 04:18:30.865344 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 04:18:30.865355 kernel: TSC deadline timer available
Sep 4 04:18:30.865365 kernel: CPU topo: Max. logical packages: 1
Sep 4 04:18:30.865380 kernel: CPU topo: Max. logical dies: 1
Sep 4 04:18:30.865390 kernel: CPU topo: Max. dies per package: 1
Sep 4 04:18:30.865401 kernel: CPU topo: Max. threads per core: 1
Sep 4 04:18:30.865411 kernel: CPU topo: Num. cores per package: 4
Sep 4 04:18:30.865421 kernel: CPU topo: Num. threads per package: 4
Sep 4 04:18:30.865432 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 04:18:30.865443 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 04:18:30.865459 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 04:18:30.865472 kernel: kvm-guest: setup PV sched yield
Sep 4 04:18:30.865489 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 04:18:30.865500 kernel: Booting paravirtualized kernel on KVM
Sep 4 04:18:30.865511 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 04:18:30.865521 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 04:18:30.865531 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 04:18:30.865542 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 04:18:30.865552 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 04:18:30.865563 kernel: kvm-guest: PV spinlocks enabled
Sep 4 04:18:30.865573 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 04:18:30.865590 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:18:30.865601 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 04:18:30.865611 kernel: random: crng init done
Sep 4 04:18:30.865622 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 04:18:30.865632 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 04:18:30.865643 kernel: Fallback order for Node 0: 0
Sep 4 04:18:30.865654 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 4 04:18:30.865664 kernel: Policy zone: DMA32
Sep 4 04:18:30.865679 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 04:18:30.865690 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 04:18:30.865700 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 4 04:18:30.865710 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 04:18:30.865720 kernel: Dynamic Preempt: voluntary
Sep 4 04:18:30.865737 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 04:18:30.865755 kernel: rcu: RCU event tracing is enabled.
Sep 4 04:18:30.865769 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 04:18:30.865780 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 04:18:30.865795 kernel: Rude variant of Tasks RCU enabled.
Sep 4 04:18:30.865811 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 04:18:30.865821 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 04:18:30.865832 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 04:18:30.865843 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:18:30.865854 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:18:30.865864 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:18:30.865875 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 04:18:30.865886 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 04:18:30.865908 kernel: Console: colour VGA+ 80x25
Sep 4 04:18:30.865919 kernel: printk: legacy console [ttyS0] enabled
Sep 4 04:18:30.865930 kernel: ACPI: Core revision 20240827
Sep 4 04:18:30.865944 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 04:18:30.865955 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 04:18:30.865966 kernel: x2apic enabled
Sep 4 04:18:30.865977 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 04:18:30.865992 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 04:18:30.866003 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 04:18:30.866017 kernel: kvm-guest: setup PV IPIs
Sep 4 04:18:30.866028 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 04:18:30.866040 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:18:30.866051 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 04:18:30.866062 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 04:18:30.866073 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 04:18:30.866085 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 04:18:30.866107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 04:18:30.866139 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 04:18:30.866152 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 04:18:30.866183 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 04:18:30.866195 kernel: active return thunk: retbleed_return_thunk
Sep 4 04:18:30.866206 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 04:18:30.866218 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 04:18:30.866232 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 04:18:30.866247 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 04:18:30.866264 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 04:18:30.866275 kernel: active return thunk: srso_return_thunk
Sep 4 04:18:30.866294 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 04:18:30.866307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 04:18:30.866318 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 04:18:30.866329 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 04:18:30.866340 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 04:18:30.866351 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 04:18:30.866362 kernel: Freeing SMP alternatives memory: 32K
Sep 4 04:18:30.866377 kernel: pid_max: default: 32768 minimum: 301
Sep 4 04:18:30.866388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 04:18:30.866399 kernel: landlock: Up and running.
Sep 4 04:18:30.866410 kernel: SELinux: Initializing.
Sep 4 04:18:30.866425 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:18:30.866437 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:18:30.866448 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 04:18:30.866459 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 04:18:30.866471 kernel: ... version: 0
Sep 4 04:18:30.866486 kernel: ... bit width: 48
Sep 4 04:18:30.866497 kernel: ... generic registers: 6
Sep 4 04:18:30.866508 kernel: ... value mask: 0000ffffffffffff
Sep 4 04:18:30.866519 kernel: ... max period: 00007fffffffffff
Sep 4 04:18:30.866530 kernel: ... fixed-purpose events: 0
Sep 4 04:18:30.866541 kernel: ... event mask: 000000000000003f
Sep 4 04:18:30.866552 kernel: signal: max sigframe size: 1776
Sep 4 04:18:30.866562 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 04:18:30.866574 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 04:18:30.866589 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 04:18:30.866600 kernel: smp: Bringing up secondary CPUs ...
Sep 4 04:18:30.866611 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 04:18:30.866622 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 04:18:30.866633 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 04:18:30.866644 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 04:18:30.866655 kernel: Memory: 2426876K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 57768K init, 1248K bss, 138952K reserved, 0K cma-reserved)
Sep 4 04:18:30.866667 kernel: devtmpfs: initialized
Sep 4 04:18:30.866678 kernel: x86/mm: Memory block size: 128MB
Sep 4 04:18:30.866692 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 04:18:30.866703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 04:18:30.866714 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 04:18:30.866733 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 04:18:30.866745 kernel: audit: initializing netlink subsys (disabled)
Sep 4 04:18:30.866756 kernel: audit: type=2000 audit(1756959507.344:1): state=initialized audit_enabled=0 res=1
Sep 4 04:18:30.866767 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 04:18:30.866778 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 04:18:30.866792 kernel: cpuidle: using governor menu
Sep 4 04:18:30.866807 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 04:18:30.866818 kernel: dca service started, version 1.12.1
Sep 4 04:18:30.866829 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 4 04:18:30.866840 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 4 04:18:30.866852 kernel: PCI: Using configuration type 1 for base access
Sep 4 04:18:30.866863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 04:18:30.866874 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 04:18:30.866885 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 04:18:30.866896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 04:18:30.866910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 04:18:30.866921 kernel: ACPI: Added _OSI(Module Device)
Sep 4 04:18:30.866932 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 04:18:30.866943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 04:18:30.866957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 04:18:30.866969 kernel: ACPI: Interpreter enabled
Sep 4 04:18:30.866981 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 04:18:30.866992 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 04:18:30.867011 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 04:18:30.867026 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 04:18:30.867037 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 04:18:30.867048 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 04:18:30.867344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 04:18:30.867505 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 04:18:30.867674 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 04:18:30.867691 kernel: PCI host bridge to bus 0000:00
Sep 4 04:18:30.867952 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 04:18:30.868204 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 04:18:30.868386 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 04:18:30.868563 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 4 04:18:30.868739 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 04:18:30.868928 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 04:18:30.869070 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 04:18:30.869332 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 04:18:30.869874 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 04:18:30.870031 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 4 04:18:30.870228 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 4 04:18:30.870390 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 4 04:18:30.870568 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 04:18:30.870810 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 04:18:30.870959 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 4 04:18:30.871231 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 4 04:18:30.871475 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 04:18:30.871643 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 04:18:30.871771 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 4 04:18:30.871895 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 4 04:18:30.872053 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 04:18:30.872368 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 04:18:30.872582 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 4 04:18:30.872790 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 4 04:18:30.872958 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 04:18:30.873180 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 4 04:18:30.873427 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 04:18:30.873657 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 04:18:30.873892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 04:18:30.874181 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 4 04:18:30.874443 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 4 04:18:30.874741 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 04:18:30.874896 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 4 04:18:30.874911 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 04:18:30.874929 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 04:18:30.874940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 04:18:30.874951 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 04:18:30.874961 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 04:18:30.874972 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 04:18:30.874983 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 04:18:30.874994 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 04:18:30.875005 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 04:18:30.875025 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 04:18:30.875043 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 04:18:30.875054 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 04:18:30.875066 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 04:18:30.875076 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 04:18:30.875092 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 04:18:30.875119 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 04:18:30.875152 kernel: iommu: Default domain type: Translated
Sep 4 04:18:30.875163 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 04:18:30.875174 kernel: PCI: Using ACPI for IRQ routing
Sep 4 04:18:30.875191 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 04:18:30.875202 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 04:18:30.875214 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 4 04:18:30.876341 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 04:18:30.876597 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 04:18:30.876768 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 04:18:30.876786 kernel: vgaarb: loaded
Sep 4 04:18:30.876797 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 04:18:30.876815 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 04:18:30.876827 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 04:18:30.876838 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 04:18:30.876851 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 04:18:30.876862 kernel: pnp: PnP ACPI init
Sep 4 04:18:30.877083 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 04:18:30.877180 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 04:18:30.877197 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 04:18:30.877214 kernel: NET: Registered PF_INET protocol family
Sep 4 04:18:30.877226 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 04:18:30.877238 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 04:18:30.877249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 04:18:30.877260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 04:18:30.877272 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 04:18:30.877283 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 04:18:30.877311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:18:30.877333 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:18:30.877349 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 04:18:30.877367 kernel: NET: Registered PF_XDP protocol family
Sep 4 04:18:30.877589 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 04:18:30.877928 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 04:18:30.878229 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 04:18:30.878384 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 4 04:18:30.878527 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 04:18:30.878672 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 04:18:30.878694 kernel: PCI: CLS 0 bytes, default 64
Sep 4 04:18:30.878706 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:18:30.878717 kernel: Initialise system trusted keyrings
Sep 4 04:18:30.878728 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 04:18:30.878740 kernel: Key type asymmetric registered
Sep 4 04:18:30.878751 kernel: Asymmetric key parser 'x509' registered
Sep 4 04:18:30.878762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 04:18:30.878774 kernel: io scheduler mq-deadline registered
Sep 4 04:18:30.878785 kernel: io scheduler kyber registered
Sep 4 04:18:30.878800 kernel: io scheduler bfq registered
Sep 4 04:18:30.878811 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 04:18:30.878823 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 04:18:30.878834 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 04:18:30.878844 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 04:18:30.878855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 04:18:30.878866 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 04:18:30.878877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 04:18:30.878888 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 04:18:30.878903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 04:18:30.878914 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 04:18:30.879189 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 04:18:30.879344 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 04:18:30.879492 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T04:18:30 UTC (1756959510)
Sep 4 04:18:30.879638 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 04:18:30.879654 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 04:18:30.879667 kernel: NET: Registered PF_INET6 protocol family
Sep 4 04:18:30.879684 kernel: Segment Routing with IPv6
Sep 4 04:18:30.879695 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 04:18:30.879707 kernel: NET: Registered PF_PACKET protocol family
Sep 4 04:18:30.879719 kernel: Key type dns_resolver registered
Sep 4 04:18:30.879730 kernel: IPI shorthand broadcast: enabled
Sep 4 04:18:30.879742 kernel: sched_clock: Marking stable (3510004062, 130408431)->(3672789259, -32376766)
Sep 4 04:18:30.879754 kernel: registered taskstats version 1
Sep 4 04:18:30.879765 kernel: Loading compiled-in X.509 certificates
Sep 4 04:18:30.879777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 2c6c093c583f207375cbe16db1a23ce651c8380d'
Sep 4 04:18:30.879792 kernel: Demotion targets for Node 0: null
Sep 4 04:18:30.879803 kernel: Key type .fscrypt registered
Sep 4 04:18:30.879814 kernel: Key type fscrypt-provisioning registered
Sep 4 04:18:30.879825 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 04:18:30.879836 kernel: ima: Allocated hash algorithm: sha1
Sep 4 04:18:30.879847 kernel: ima: No architecture policies found
Sep 4 04:18:30.879858 kernel: clk: Disabling unused clocks
Sep 4 04:18:30.879869 kernel: Warning: unable to open an initial console.
Sep 4 04:18:30.879881 kernel: Freeing unused kernel image (initmem) memory: 57768K
Sep 4 04:18:30.879896 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 04:18:30.879907 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 4 04:18:30.879918 kernel: Run /init as init process
Sep 4 04:18:30.879929 kernel: with arguments:
Sep 4 04:18:30.879939 kernel: /init
Sep 4 04:18:30.879950 kernel: with environment:
Sep 4 04:18:30.879961 kernel: HOME=/
Sep 4 04:18:30.879971 kernel: TERM=linux
Sep 4 04:18:30.879983 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 04:18:30.880004 systemd[1]: Successfully made /usr/ read-only.
Sep 4 04:18:30.880039 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 04:18:30.880084 systemd[1]: Detected virtualization kvm.
Sep 4 04:18:30.880117 systemd[1]: Detected architecture x86-64.
Sep 4 04:18:30.880176 systemd[1]: Running in initrd.
Sep 4 04:18:30.880193 systemd[1]: No hostname configured, using default hostname.
Sep 4 04:18:30.880222 systemd[1]: Hostname set to .
Sep 4 04:18:30.880241 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 04:18:30.880252 systemd[1]: Queued start job for default target initrd.target.
Sep 4 04:18:30.880264 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:18:30.880275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:18:30.880288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 04:18:30.880300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 04:18:30.880318 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 04:18:30.880331 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 04:18:30.880344 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 04:18:30.880356 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 04:18:30.880368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:18:30.880381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:18:30.880392 systemd[1]: Reached target paths.target - Path Units.
Sep 4 04:18:30.880408 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 04:18:30.880420 systemd[1]: Reached target swap.target - Swaps.
Sep 4 04:18:30.880432 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 04:18:30.880444 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 04:18:30.880456 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 04:18:30.880468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 04:18:30.880481 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 04:18:30.880493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:18:30.880518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:18:30.880542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:18:30.880573 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 04:18:30.880601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 04:18:30.880619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 04:18:30.880636 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 04:18:30.880651 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 04:18:30.880683 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 04:18:30.880707 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 04:18:30.880732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 04:18:30.880751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 04:18:30.880763 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 04:18:30.880788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:18:30.880804 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 04:18:30.880817 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 04:18:30.880868 systemd-journald[218]: Collecting audit messages is disabled.
Sep 4 04:18:30.880904 systemd-journald[218]: Journal started
Sep 4 04:18:30.880936 systemd-journald[218]: Runtime Journal (/run/log/journal/cedd74a6212042ebb20151b39209debd) is 6M, max 48.6M, 42.5M free.
Sep 4 04:18:30.881157 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 04:18:30.861624 systemd-modules-load[221]: Inserted module 'overlay'
Sep 4 04:18:30.915165 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 04:18:30.915207 kernel: Bridge firewalling registered
Sep 4 04:18:30.895671 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 4 04:18:30.916594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:18:30.919326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:18:30.922199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 04:18:30.929940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 04:18:30.932800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 04:18:30.935430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 04:18:30.939259 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 04:18:30.950345 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 04:18:30.951400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:18:30.953414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:18:30.962422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:18:30.965327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 04:18:30.966833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 04:18:30.970881 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 04:18:31.000891 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:18:31.030460 systemd-resolved[261]: Positive Trust Anchors:
Sep 4 04:18:31.030476 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 04:18:31.030506 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 04:18:31.033845 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 4 04:18:31.035206 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 04:18:31.041870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:18:31.146179 kernel: SCSI subsystem initialized
Sep 4 04:18:31.156159 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 04:18:31.167171 kernel: iscsi: registered transport (tcp)
Sep 4 04:18:31.189178 kernel: iscsi: registered transport (qla4xxx)
Sep 4 04:18:31.189248 kernel: QLogic iSCSI HBA Driver
Sep 4 04:18:31.212169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 04:18:31.241325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:18:31.246038 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 04:18:31.391745 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 04:18:31.393960 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 04:18:31.451167 kernel: raid6: avx2x4 gen() 29056 MB/s
Sep 4 04:18:31.468147 kernel: raid6: avx2x2 gen() 30137 MB/s
Sep 4 04:18:31.485231 kernel: raid6: avx2x1 gen() 24830 MB/s
Sep 4 04:18:31.485254 kernel: raid6: using algorithm avx2x2 gen() 30137 MB/s
Sep 4 04:18:31.503275 kernel: raid6: .... xor() 18984 MB/s, rmw enabled
Sep 4 04:18:31.503313 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 04:18:31.526197 kernel: xor: automatically using best checksumming function avx
Sep 4 04:18:31.701194 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 04:18:31.710022 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 04:18:31.712423 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:18:31.745465 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 4 04:18:31.752644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:18:31.753932 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 04:18:31.779279 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Sep 4 04:18:31.813372 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 04:18:31.815195 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 04:18:31.916746 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:18:31.924949 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 04:18:31.974244 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 04:18:31.983083 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 04:18:31.988675 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 04:18:31.988759 kernel: libata version 3.00 loaded.
Sep 4 04:18:32.004614 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 04:18:32.004684 kernel: GPT:9289727 != 19775487
Sep 4 04:18:32.004700 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 04:18:32.004715 kernel: GPT:9289727 != 19775487
Sep 4 04:18:32.006608 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 04:18:32.006645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 04:18:32.020232 kernel: AES CTR mode by8 optimization enabled
Sep 4 04:18:32.020293 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 4 04:18:32.035165 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 04:18:32.035437 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 04:18:32.034146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 04:18:32.034281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:18:32.061663 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 4 04:18:32.061926 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 4 04:18:32.062140 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 04:18:32.063861 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 04:18:32.097658 kernel: scsi host0: ahci
Sep 4 04:18:32.098179 kernel: scsi host1: ahci
Sep 4 04:18:32.098405 kernel: scsi host2: ahci
Sep 4 04:18:32.102876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 04:18:32.137216 kernel: scsi host3: ahci
Sep 4 04:18:32.138180 kernel: scsi host4: ahci
Sep 4 04:18:32.139869 kernel: scsi host5: ahci
Sep 4 04:18:32.140106 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 4 04:18:32.140142 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 4 04:18:32.140826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:18:32.160270 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 4 04:18:32.160328 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 4 04:18:32.160345 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 4 04:18:32.160360 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 4 04:18:32.180862 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 04:18:32.201650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 04:18:32.210505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 04:18:32.248702 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 04:18:32.250288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 04:18:32.250615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:18:32.257482 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 04:18:32.381653 disk-uuid[633]: Primary Header is updated.
Sep 4 04:18:32.381653 disk-uuid[633]: Secondary Entries is updated.
Sep 4 04:18:32.381653 disk-uuid[633]: Secondary Header is updated.
Sep 4 04:18:32.386159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 04:18:32.391168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 04:18:32.475664 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 04:18:32.475742 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 04:18:32.475758 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 04:18:32.475772 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 04:18:32.475788 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 04:18:32.479052 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 04:18:32.479113 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 04:18:32.479141 kernel: ata3.00: applying bridge limits
Sep 4 04:18:32.481526 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 4 04:18:32.481565 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 04:18:32.481580 kernel: ata3.00: configured for UDMA/100
Sep 4 04:18:32.484170 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 04:18:32.545193 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 04:18:32.545669 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 04:18:32.566178 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 4 04:18:32.989973 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 04:18:32.992256 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 04:18:32.994084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:18:32.996730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 04:18:33.000416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 04:18:33.038578 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 04:18:33.393171 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 04:18:33.393567 disk-uuid[634]: The operation has completed successfully.
Sep 4 04:18:33.429888 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 04:18:33.430015 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 04:18:33.464542 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 04:18:33.495288 sh[662]: Success
Sep 4 04:18:33.516223 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 04:18:33.516300 kernel: device-mapper: uevent: version 1.0.3
Sep 4 04:18:33.517526 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 04:18:33.529171 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 4 04:18:33.565822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 04:18:33.569675 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 04:18:33.594237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 04:18:33.600161 kernel: BTRFS: device fsid c26d2db4-0109-42a5-bc6f-bbb834b82868 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (674)
Sep 4 04:18:33.602266 kernel: BTRFS info (device dm-0): first mount of filesystem c26d2db4-0109-42a5-bc6f-bbb834b82868
Sep 4 04:18:33.602291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 04:18:33.607363 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 04:18:33.607400 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 04:18:33.609347 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 04:18:33.612083 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 04:18:33.613591 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 04:18:33.614802 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 04:18:33.618844 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 04:18:33.650167 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706)
Sep 4 04:18:33.652909 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771
Sep 4 04:18:33.652943 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 04:18:33.656357 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 04:18:33.656387 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 04:18:33.663157 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771
Sep 4 04:18:33.664006 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 04:18:33.666871 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 04:18:33.788333 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 04:18:33.791051 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 04:18:33.869888 ignition[749]: Ignition 2.22.0
Sep 4 04:18:33.870319 ignition[749]: Stage: fetch-offline
Sep 4 04:18:33.870386 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:33.870400 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:33.870528 ignition[749]: parsed url from cmdline: ""
Sep 4 04:18:33.870533 ignition[749]: no config URL provided
Sep 4 04:18:33.870541 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 04:18:33.870556 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Sep 4 04:18:33.870587 ignition[749]: op(1): [started] loading QEMU firmware config module
Sep 4 04:18:33.876519 systemd-networkd[845]: lo: Link UP
Sep 4 04:18:33.870594 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 04:18:33.876526 systemd-networkd[845]: lo: Gained carrier
Sep 4 04:18:33.879095 systemd-networkd[845]: Enumeration completed
Sep 4 04:18:33.879425 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 04:18:33.879805 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:18:33.879812 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 04:18:33.886736 ignition[749]: op(1): [finished] loading QEMU firmware config module
Sep 4 04:18:33.880400 systemd-networkd[845]: eth0: Link UP
Sep 4 04:18:33.882491 systemd[1]: Reached target network.target - Network.
Sep 4 04:18:33.882783 systemd-networkd[845]: eth0: Gained carrier
Sep 4 04:18:33.882797 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:18:33.894200 systemd-networkd[845]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 04:18:33.947432 ignition[749]: parsing config with SHA512: da9cecff4fda197e91dc85e9a444b1ea19e7132d87e56f454dd6b1e55c2b258c325d2ef8ec85dbf5c83f31d6c80f59c33673ffe80bef07f9ac37ad55e3018daf
Sep 4 04:18:33.956894 unknown[749]: fetched base config from "system"
Sep 4 04:18:33.956911 unknown[749]: fetched user config from "qemu"
Sep 4 04:18:33.957408 ignition[749]: fetch-offline: fetch-offline passed
Sep 4 04:18:33.957512 ignition[749]: Ignition finished successfully
Sep 4 04:18:33.963062 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 04:18:33.965635 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 04:18:33.966886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 04:18:34.045971 ignition[859]: Ignition 2.22.0
Sep 4 04:18:34.045988 ignition[859]: Stage: kargs
Sep 4 04:18:34.046263 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:34.046279 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:34.047283 ignition[859]: kargs: kargs passed
Sep 4 04:18:34.047345 ignition[859]: Ignition finished successfully
Sep 4 04:18:34.055026 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 04:18:34.056792 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 04:18:34.128937 ignition[867]: Ignition 2.22.0
Sep 4 04:18:34.128955 ignition[867]: Stage: disks
Sep 4 04:18:34.129172 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:34.129188 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:34.130561 ignition[867]: disks: disks passed
Sep 4 04:18:34.130625 ignition[867]: Ignition finished successfully
Sep 4 04:18:34.139922 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 04:18:34.141929 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 04:18:34.145239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 04:18:34.153263 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 04:18:34.155750 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 04:18:34.156263 systemd[1]: Reached target basic.target - Basic System.
Sep 4 04:18:34.158165 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 04:18:34.208684 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.55
Sep 4 04:18:34.208848 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Sep 4 04:18:34.215399 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 04:18:34.456983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 04:18:34.469151 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 04:18:34.668186 kernel: EXT4-fs (vda9): mounted filesystem d147a273-ffc0-4c78-a5f1-46a3b3f6b4ff r/w with ordered data mode. Quota mode: none.
Sep 4 04:18:34.669398 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 04:18:34.672417 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 04:18:34.678520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 04:18:34.688562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 04:18:34.691072 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 04:18:34.691147 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 04:18:34.691182 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 04:18:34.715446 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 04:18:34.717604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 04:18:34.727181 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885)
Sep 4 04:18:34.729529 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771
Sep 4 04:18:34.729588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 04:18:34.741447 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 04:18:34.742482 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 04:18:34.745042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 04:18:34.790966 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 04:18:34.796085 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
Sep 4 04:18:34.801814 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 04:18:34.807193 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 04:18:34.977511 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 04:18:34.985203 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 04:18:34.990145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 04:18:35.007515 systemd-networkd[845]: eth0: Gained IPv6LL
Sep 4 04:18:35.015094 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 04:18:35.020549 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771
Sep 4 04:18:35.096933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 04:18:35.131169 ignition[999]: INFO : Ignition 2.22.0
Sep 4 04:18:35.131169 ignition[999]: INFO : Stage: mount
Sep 4 04:18:35.133344 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:35.133344 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:35.133344 ignition[999]: INFO : mount: mount passed
Sep 4 04:18:35.133344 ignition[999]: INFO : Ignition finished successfully
Sep 4 04:18:35.136772 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 04:18:35.139698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 04:18:35.672099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 04:18:35.721032 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Sep 4 04:18:35.733960 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771
Sep 4 04:18:35.734031 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 04:18:35.738157 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 04:18:35.738208 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 04:18:35.740735 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 04:18:35.821280 ignition[1028]: INFO : Ignition 2.22.0
Sep 4 04:18:35.821280 ignition[1028]: INFO : Stage: files
Sep 4 04:18:35.823848 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:35.823848 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:35.827065 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 04:18:35.827065 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 04:18:35.827065 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 04:18:35.831420 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 04:18:35.831420 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 04:18:35.831420 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 04:18:35.830520 unknown[1028]: wrote ssh authorized keys file for user: core
Sep 4 04:18:35.837006 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 04:18:35.837006 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 04:18:35.884036 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 04:18:36.199106 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 04:18:36.201842 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 04:18:36.261113 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 04:18:36.263886 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 04:18:36.263886 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:18:36.320418 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:18:36.320418 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:18:36.326264 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 4 04:18:36.862745 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 04:18:37.493854 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:18:37.493854 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 04:18:37.515220 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 04:18:37.588663 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 04:18:37.588663 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 04:18:37.588663 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 04:18:37.588663 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 04:18:37.599477 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 04:18:37.599477 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 04:18:37.599477 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 04:18:37.617611 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 04:18:37.625901 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 04:18:37.628365 ignition[1028]: INFO : files: files passed
Sep 4 04:18:37.628365 ignition[1028]: INFO : Ignition finished successfully
Sep 4 04:18:37.631743 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 04:18:37.646670 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 04:18:37.655067 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 04:18:37.665838 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 04:18:37.666039 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 04:18:37.672066 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 04:18:37.678543 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:18:37.678543 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:18:37.690239 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:18:37.692602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:18:37.695872 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 04:18:37.698366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 04:18:37.773386 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 04:18:37.773573 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 04:18:37.777453 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 04:18:37.779568 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 04:18:37.779764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 04:18:37.783961 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 04:18:37.824753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:18:37.828103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 04:18:37.864167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:18:37.866988 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:18:37.868525 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 04:18:37.894216 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 04:18:37.894396 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:18:37.897421 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 04:18:37.898522 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 04:18:37.899526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 04:18:37.899834 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 04:18:37.900224 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 04:18:37.900672 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 04:18:37.901066 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 04:18:37.901572 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 04:18:37.901936 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 04:18:37.924693 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 04:18:37.925038 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 04:18:37.925520 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 04:18:37.925637 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 04:18:37.926387 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:18:37.926765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:18:37.927118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 04:18:37.927321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:18:37.927744 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 04:18:37.927954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 04:18:37.980781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 04:18:37.981022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 04:18:37.981986 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 04:18:37.985002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 04:18:37.988267 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:18:38.001094 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 04:18:38.002796 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 04:18:38.010415 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 04:18:38.010573 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 04:18:38.012052 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 04:18:38.012173 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 04:18:38.013848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 04:18:38.013993 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:18:38.015689 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 04:18:38.015838 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 04:18:38.020322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 04:18:38.021459 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 04:18:38.021636 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:18:38.025590 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 04:18:38.027240 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 04:18:38.027425 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:18:38.027973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 04:18:38.028119 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 04:18:38.034213 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 04:18:38.045508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 04:18:38.071573 ignition[1084]: INFO : Ignition 2.22.0
Sep 4 04:18:38.071573 ignition[1084]: INFO : Stage: umount
Sep 4 04:18:38.073358 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:18:38.073358 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:18:38.073358 ignition[1084]: INFO : umount: umount passed
Sep 4 04:18:38.073358 ignition[1084]: INFO : Ignition finished successfully
Sep 4 04:18:38.075957 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 04:18:38.076101 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 04:18:38.077953 systemd[1]: Stopped target network.target - Network.
Sep 4 04:18:38.079476 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 04:18:38.079538 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 04:18:38.081735 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 04:18:38.081787 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 04:18:38.084085 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 04:18:38.084221 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 04:18:38.085863 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 04:18:38.085948 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 04:18:38.088303 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 04:18:38.090411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 04:18:38.091718 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 04:18:38.092440 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 04:18:38.092580 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 04:18:38.106668 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 04:18:38.106788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 04:18:38.123168 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 04:18:38.123316 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 04:18:38.129830 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 04:18:38.130172 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 04:18:38.130328 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 04:18:38.145735 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 04:18:38.147517 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 04:18:38.149070 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 04:18:38.149194 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:18:38.159596 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 04:18:38.160860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 04:18:38.160932 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 04:18:38.161526 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 04:18:38.161573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:18:38.168646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 04:18:38.168746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:18:38.169460 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 04:18:38.169519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:18:38.175679 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:18:38.180776 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 04:18:38.180900 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:18:38.205399 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 04:18:38.205624 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:18:38.226092 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 04:18:38.226181 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:18:38.227415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 04:18:38.227456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:18:38.230661 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 04:18:38.230717 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 04:18:38.233760 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 04:18:38.233835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 04:18:38.248321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 04:18:38.248423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 04:18:38.252571 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 04:18:38.253355 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 04:18:38.253412 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:18:38.268116 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 04:18:38.268235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:18:38.273199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 04:18:38.273287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:18:38.290262 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 04:18:38.290361 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 04:18:38.290422 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:18:38.290848 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 04:18:38.293297 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 04:18:38.303098 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 04:18:38.303285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 04:18:38.332495 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 04:18:38.355518 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 04:18:38.384990 systemd[1]: Switching root.
Sep 4 04:18:38.433474 systemd-journald[218]: Journal stopped
Sep 4 04:18:40.288721 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 4 04:18:40.288817 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 04:18:40.288839 kernel: SELinux: policy capability open_perms=1
Sep 4 04:18:40.288856 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 04:18:40.288885 kernel: SELinux: policy capability always_check_network=0
Sep 4 04:18:40.288903 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 04:18:40.288930 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 04:18:40.288947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 04:18:40.288964 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 04:18:40.288981 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 04:18:40.288999 kernel: audit: type=1403 audit(1756959519.159:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 04:18:40.289017 systemd[1]: Successfully loaded SELinux policy in 87.301ms.
Sep 4 04:18:40.289061 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.238ms.
Sep 4 04:18:40.289090 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 04:18:40.289112 systemd[1]: Detected virtualization kvm.
Sep 4 04:18:40.289152 systemd[1]: Detected architecture x86-64.
Sep 4 04:18:40.289172 systemd[1]: Detected first boot.
Sep 4 04:18:40.289190 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 04:18:40.289209 zram_generator::config[1130]: No configuration found.
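The long parenthesized string in the "systemd 256.8 running in system mode" line above encodes compile-time features: `+NAME` means the feature was built in, `-NAME` means it was not. A small sketch (the parser is hypothetical; the feature string is quoted from the log above) of splitting it into enabled and disabled sets:

```python
def parse_features(feature_string):
    """Split a systemd feature string into (enabled, disabled) option sets."""
    tokens = feature_string.split()
    enabled = {t[1:] for t in tokens if t.startswith('+')}
    disabled = {t[1:] for t in tokens if t.startswith('-')}
    return enabled, disabled

# Abbreviated excerpt of the feature string from the log line above.
features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +TPM2 -SYSVINIT"
enabled, disabled = parse_features(features)
print('SELINUX' in enabled, 'APPARMOR' in disabled)  # True True
```

This matches the boot above: SELinux support is compiled in (and the policy loads in 87.301 ms), while AppArmor is not.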
Sep 4 04:18:40.289228 kernel: Guest personality initialized and is inactive
Sep 4 04:18:40.289246 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 04:18:40.289263 kernel: Initialized host personality
Sep 4 04:18:40.289279 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 04:18:40.289301 systemd[1]: Populated /etc with preset unit settings.
Sep 4 04:18:40.289320 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 04:18:40.289338 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 04:18:40.289356 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 04:18:40.289374 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 04:18:40.289399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 04:18:40.289418 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 04:18:40.289437 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 04:18:40.289459 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 04:18:40.289477 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 04:18:40.289496 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 04:18:40.289514 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 04:18:40.289533 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 04:18:40.289553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:18:40.289572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:18:40.289590 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 04:18:40.289609 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 04:18:40.289631 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 04:18:40.289649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 04:18:40.289667 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 04:18:40.289686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:18:40.289704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:18:40.289723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 04:18:40.289747 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 04:18:40.289766 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 04:18:40.289787 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 04:18:40.289806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:18:40.289824 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 04:18:40.289842 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 04:18:40.289871 systemd[1]: Reached target swap.target - Swaps.
Sep 4 04:18:40.289890 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 04:18:40.289908 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 04:18:40.289927 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 04:18:40.289945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:18:40.289969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:18:40.289987 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:18:40.290005 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 04:18:40.290023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 04:18:40.290041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 04:18:40.290060 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 04:18:40.290079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:40.290098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 04:18:40.290116 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 04:18:40.290159 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 04:18:40.290179 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 04:18:40.290198 systemd[1]: Reached target machines.target - Containers.
Sep 4 04:18:40.290217 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 04:18:40.290236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:18:40.290255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 04:18:40.290273 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 04:18:40.290292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:18:40.290313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 04:18:40.290332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:18:40.290350 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 04:18:40.290368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:18:40.290390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 04:18:40.290409 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 04:18:40.290427 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 04:18:40.290446 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 04:18:40.290464 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 04:18:40.290487 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:18:40.290505 kernel: fuse: init (API version 7.41)
Sep 4 04:18:40.290523 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 04:18:40.290541 kernel: loop: module loaded
Sep 4 04:18:40.290559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 04:18:40.290576 kernel: ACPI: bus type drm_connector registered
Sep 4 04:18:40.290594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 04:18:40.290613 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 04:18:40.290632 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 04:18:40.290653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 04:18:40.290673 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 04:18:40.290691 systemd[1]: Stopped verity-setup.service.
Sep 4 04:18:40.290709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:40.290748 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 04:18:40.290766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 04:18:40.290817 systemd-journald[1194]: Collecting audit messages is disabled.
Sep 4 04:18:40.290858 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 04:18:40.290894 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 04:18:40.290916 systemd-journald[1194]: Journal started
Sep 4 04:18:40.290950 systemd-journald[1194]: Runtime Journal (/run/log/journal/cedd74a6212042ebb20151b39209debd) is 6M, max 48.6M, 42.5M free.
Sep 4 04:18:39.860997 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 04:18:39.888292 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 04:18:39.888929 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 04:18:40.294180 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 04:18:40.295901 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 04:18:40.297158 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 04:18:40.298587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:18:40.300194 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 04:18:40.300426 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 04:18:40.301978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:18:40.302292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:18:40.317334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 04:18:40.317558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 04:18:40.319050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:18:40.319290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:18:40.320831 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 04:18:40.321099 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 04:18:40.322532 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:18:40.322748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:18:40.324304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:18:40.325885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:18:40.327560 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 04:18:40.329384 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 04:18:40.345653 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 04:18:40.348761 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 04:18:40.351550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 04:18:40.352917 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 04:18:40.352956 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 04:18:40.355158 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 04:18:40.357274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 04:18:40.359418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:18:40.387668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 04:18:40.392293 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 04:18:40.393514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 04:18:40.397555 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 04:18:40.398828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 04:18:40.405192 systemd-journald[1194]: Time spent on flushing to /var/log/journal/cedd74a6212042ebb20151b39209debd is 13.827ms for 981 entries.
Sep 4 04:18:40.405192 systemd-journald[1194]: System Journal (/var/log/journal/cedd74a6212042ebb20151b39209debd) is 8M, max 195.6M, 187.6M free.
Sep 4 04:18:40.739753 systemd-journald[1194]: Received client request to flush runtime journal.
Sep 4 04:18:40.739881 kernel: loop0: detected capacity change from 0 to 128016
Sep 4 04:18:40.739929 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 04:18:40.739968 kernel: loop1: detected capacity change from 0 to 221472
Sep 4 04:18:40.402415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 04:18:40.411153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 04:18:40.414546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:18:40.416082 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 04:18:40.417377 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 04:18:40.465542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:18:40.570844 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
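The journald flush statistics above (13.827 ms to flush 981 entries to /var/log/journal) imply a per-entry cost of roughly 14 µs. A back-of-envelope check of that arithmetic:

```python
# Figures quoted from the systemd-journald flush line above.
total_ms = 13.827   # total time spent flushing to persistent storage
entries = 981       # journal entries flushed

# Average flush cost per entry, in microseconds.
per_entry_us = total_ms * 1000 / entries
print(round(per_entry_us, 1))  # 14.1
```

So the runtime-to-persistent flush averaged about 14.1 µs per entry on this boot.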
Sep 4 04:18:40.573781 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 04:18:40.576954 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 04:18:40.586307 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 04:18:40.590378 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 04:18:40.741962 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 04:18:41.209171 kernel: loop2: detected capacity change from 0 to 110984
Sep 4 04:18:41.326181 kernel: loop3: detected capacity change from 0 to 128016
Sep 4 04:18:41.365592 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 04:18:41.381179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 04:18:41.400154 kernel: loop4: detected capacity change from 0 to 221472
Sep 4 04:18:41.467449 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 4 04:18:41.467477 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 4 04:18:41.474162 kernel: loop5: detected capacity change from 0 to 110984
Sep 4 04:18:41.475335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:18:41.487299 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 04:18:41.487965 (sd-merge)[1266]: Merged extensions into '/usr'.
Sep 4 04:18:41.512463 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 04:18:41.512483 systemd[1]: Reloading...
Sep 4 04:18:41.687179 zram_generator::config[1293]: No configuration found.
Sep 4 04:18:41.908461 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 04:18:41.923851 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 04:18:41.924295 systemd[1]: Reloading finished in 411 ms.
Sep 4 04:18:41.957536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 04:18:41.962052 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 04:18:41.996154 systemd[1]: Starting ensure-sysext.service...
Sep 4 04:18:42.056264 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 04:18:42.124101 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 04:18:42.153798 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 04:18:42.153872 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 04:18:42.154270 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 04:18:42.154517 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Sep 4 04:18:42.154543 systemd[1]: Reloading...
Sep 4 04:18:42.154548 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 04:18:42.155851 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 04:18:42.156211 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 4 04:18:42.156306 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 4 04:18:42.162717 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 04:18:42.162733 systemd-tmpfiles[1335]: Skipping /boot
Sep 4 04:18:42.176727 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 04:18:42.176747 systemd-tmpfiles[1335]: Skipping /boot
Sep 4 04:18:42.231163 zram_generator::config[1360]: No configuration found.
Sep 4 04:18:42.482930 systemd[1]: Reloading finished in 327 ms.
Sep 4 04:18:42.523373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:18:42.545091 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 04:18:42.581850 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 04:18:42.585296 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 04:18:42.598716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 04:18:42.602255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 04:18:42.628856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:42.629204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:18:42.634069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:18:42.636972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:18:42.641448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:18:42.644459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:18:42.645074 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:18:42.657310 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
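The "Duplicate line for path …, ignoring" messages above come from systemd-tmpfiles keeping the first tmpfiles.d entry it sees for a path and ignoring later ones. A simplified sketch of that first-wins behavior (systemd-tmpfiles' real parser handles many more line types and precedence rules; the sample entries here are hypothetical, only the paths echo the log above):

```python
def find_ignored_duplicates(lines):
    """Return paths whose later tmpfiles.d lines would be ignored (first wins)."""
    seen, ignored = set(), []
    for line in lines:
        fields = line.split()
        if len(fields) < 2 or fields[0].startswith('#'):
            continue  # skip blank/comment lines
        path = fields[1]  # tmpfiles.d format: TYPE PATH MODE UID GID AGE ...
        if path in seen:
            ignored.append(path)
        seen.add(path)
    return ignored

conf = [
    "d /var/lib/nfs/sm 0700 - - -",
    "d /var/lib/nfs/sm 0755 - - -",  # later line for the same path: ignored
    "d /var/log/journal 2755 - - -",
]
print(find_ignored_duplicates(conf))  # ['/var/lib/nfs/sm']
```

This mirrors why `/usr/lib/tmpfiles.d/nfs-utils.conf:6` and the other duplicate lines above are reported and skipped rather than applied twice.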
Sep 4 04:18:42.658661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:42.660780 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 04:18:42.663366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:18:42.663627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:18:42.665915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:18:42.666259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:18:42.668527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:18:42.672469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:18:42.689057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 04:18:42.706274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:42.706466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:18:42.707818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:18:42.731480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 04:18:42.754479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:18:42.762971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:18:42.766444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:18:42.766508 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:18:42.766621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:18:42.767381 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 04:18:42.773156 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 04:18:42.776217 systemd[1]: Finished ensure-sysext.service.
Sep 4 04:18:42.777975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 04:18:42.779644 augenrules[1444]: No rules
Sep 4 04:18:42.779770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:18:42.780605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:18:42.784845 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 04:18:42.785108 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 04:18:42.786779 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 04:18:42.787018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 04:18:42.788496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:18:42.788747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:18:42.790279 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:18:42.790504 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:18:42.797742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 04:18:42.797919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 04:18:42.800315 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 04:18:42.802763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:18:42.806260 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 04:18:42.824521 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 04:18:42.841545 systemd-resolved[1404]: Positive Trust Anchors:
Sep 4 04:18:42.841562 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 04:18:42.841592 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 04:18:42.846856 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 04:18:42.854852 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Sep 4 04:18:42.857283 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 04:18:42.867757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:18:42.869575 systemd-udevd[1457]: Using default interface naming scheme 'v255'.
Sep 4 04:18:42.890813 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:18:42.898954 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 04:18:42.906378 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 04:18:42.908080 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 04:18:42.914204 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 04:18:42.916561 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 04:18:42.918080 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 04:18:42.919495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 04:18:42.921447 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 04:18:42.921493 systemd[1]: Reached target paths.target - Path Units.
Sep 4 04:18:42.923176 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 04:18:42.924959 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 04:18:42.944511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 04:18:42.946165 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 04:18:42.948420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 04:18:42.951738 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 04:18:42.961603 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 04:18:42.965805 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 04:18:42.967853 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 04:18:43.021352 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 04:18:43.023757 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 04:18:43.026560 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 04:18:43.040206 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 04:18:43.042025 systemd[1]: Reached target basic.target - Basic System.
Sep 4 04:18:43.043653 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 04:18:43.043697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 04:18:43.046526 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 04:18:43.049433 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 04:18:43.058348 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 04:18:43.067392 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 04:18:43.068965 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 04:18:43.072488 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 04:18:43.081497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 04:18:43.087972 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 04:18:43.089386 oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Sep 4 04:18:43.090634 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Sep 4 04:18:43.090864 jq[1499]: false
Sep 4 04:18:43.091464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 04:18:43.091768 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting users, quitting
Sep 4 04:18:43.091760 oslogin_cache_refresh[1503]: Failure getting users, quitting
Sep 4 04:18:43.092058 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 04:18:43.092058 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing group entry cache
Sep 4 04:18:43.091780 oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 04:18:43.091849 oslogin_cache_refresh[1503]: Refreshing group entry cache
Sep 4 04:18:43.092543 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting groups, quitting
Sep 4 04:18:43.092543 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 04:18:43.092535 oslogin_cache_refresh[1503]: Failure getting groups, quitting
Sep 4 04:18:43.092547 oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 04:18:43.099415 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 04:18:43.111388 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 04:18:43.114201 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 04:18:43.115037 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 04:18:43.117350 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 04:18:43.122043 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 04:18:43.134030 extend-filesystems[1502]: Found /dev/vda6
Sep 4 04:18:43.137168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 04:18:43.141246 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 04:18:43.142538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 04:18:43.143065 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 04:18:43.143393 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 04:18:43.146810 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 04:18:43.147195 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 04:18:43.147716 jq[1517]: true
Sep 4 04:18:43.150617 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 04:18:43.173143 jq[1523]: true
Sep 4 04:18:43.220807 update_engine[1514]: I20250904 04:18:43.220681 1514 main.cc:92] Flatcar Update Engine starting
Sep 4 04:18:43.230150 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 04:18:43.230976 extend-filesystems[1502]: Found /dev/vda9
Sep 4 04:18:43.255346 tar[1522]: linux-amd64/helm
Sep 4 04:18:43.274654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 04:18:43.289182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 04:18:43.295149 kernel: ACPI: button: Power Button [PWRF]
Sep 4 04:18:43.296057 systemd-logind[1513]: New seat seat0.
Sep 4 04:18:43.296193 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 04:18:43.300052 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 04:18:43.303734 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 4 04:18:43.304094 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 4 04:18:43.306689 extend-filesystems[1502]: Checking size of /dev/vda9
Sep 4 04:18:43.342941 dbus-daemon[1497]: [system] SELinux support is enabled
Sep 4 04:18:43.343033 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 04:18:43.344771 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 04:18:43.358793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 04:18:43.358834 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 04:18:43.363257 update_engine[1514]: I20250904 04:18:43.363196 1514 update_check_scheduler.cc:74] Next update check in 5m30s
Sep 4 04:18:43.366078 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 04:18:43.378955 systemd-networkd[1465]: lo: Link UP
Sep 4 04:18:43.378973 systemd-networkd[1465]: lo: Gained carrier
Sep 4 04:18:43.387271 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 04:18:43.388375 systemd-networkd[1465]: Enumeration completed
Sep 4 04:18:43.392234 extend-filesystems[1502]: Resized partition /dev/vda9
Sep 4 04:18:43.419369 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025)
Sep 4 04:18:43.401896 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 04:18:43.401926 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 04:18:43.403415 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 04:18:43.404694 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 04:18:43.406462 systemd[1]: Reached target network.target - Network.
Sep 4 04:18:43.411934 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:18:43.411940 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 04:18:43.416885 systemd-networkd[1465]: eth0: Link UP
Sep 4 04:18:43.417467 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 04:18:43.419517 systemd-networkd[1465]: eth0: Gained carrier
Sep 4 04:18:43.419549 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:18:43.426430 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 04:18:43.443382 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 04:18:43.459082 systemd-networkd[1465]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 04:18:43.461799 systemd-timesyncd[1456]: Network configuration changed, trying to establish connection.
Sep 4 04:18:43.463575 systemd-timesyncd[1456]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 04:18:43.463851 systemd-timesyncd[1456]: Initial clock synchronization to Thu 2025-09-04 04:18:43.843201 UTC.
Sep 4 04:18:43.506945 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 04:18:43.509744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 04:18:43.545175 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 04:18:43.575522 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 04:18:43.685902 kernel: kvm_amd: TSC scaling supported
Sep 4 04:18:43.685967 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 04:18:43.685989 kernel: kvm_amd: Nested Paging enabled
Sep 4 04:18:43.686008 kernel: kvm_amd: LBR virtualization supported
Sep 4 04:18:43.686025 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 04:18:43.686043 kernel: kvm_amd: Virtual GIF supported
Sep 4 04:18:43.575721 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 04:18:43.686241 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 04:18:43.615181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 04:18:43.657440 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 04:18:43.665818 systemd-logind[1513]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 04:18:43.713826 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 04:18:43.716421 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 04:18:43.757121 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 04:18:43.757510 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 04:18:43.766974 bash[1549]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 04:18:43.764770 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 04:18:43.768383 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 04:18:43.789106 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 04:18:43.795903 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 04:18:43.852399 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 04:18:43.858461 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 04:18:43.878439 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 04:18:43.878716 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 04:18:43.882430 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 04:18:43.909182 kernel: EDAC MC: Ver: 3.0.0
Sep 4 04:18:44.103976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:18:44.125514 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 04:18:44.143583 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:39728.service - OpenSSH per-connection server daemon (10.0.0.1:39728).
Sep 4 04:18:44.433623 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 04:18:44.433623 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 04:18:44.433623 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 04:18:44.536082 extend-filesystems[1502]: Resized filesystem in /dev/vda9
Sep 4 04:18:44.443265 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 04:18:44.443957 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 04:18:44.537906 containerd[1582]: time="2025-09-04T04:18:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 04:18:44.538800 containerd[1582]: time="2025-09-04T04:18:44.538738397Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 04:18:44.541725 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 39728 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:44.543326 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:44.549193 tar[1522]: linux-amd64/LICENSE
Sep 4 04:18:44.549193 tar[1522]: linux-amd64/README.md
Sep 4 04:18:44.550646 containerd[1582]: time="2025-09-04T04:18:44.550594714Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.229µs"
Sep 4 04:18:44.550646 containerd[1582]: time="2025-09-04T04:18:44.550637866Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 04:18:44.550763 containerd[1582]: time="2025-09-04T04:18:44.550665540Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 04:18:44.550947 containerd[1582]: time="2025-09-04T04:18:44.550904696Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 04:18:44.550947 containerd[1582]: time="2025-09-04T04:18:44.550924562Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 04:18:44.550994 containerd[1582]: time="2025-09-04T04:18:44.550974294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551074 containerd[1582]: time="2025-09-04T04:18:44.551053107Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551074 containerd[1582]: time="2025-09-04T04:18:44.551069730Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551461 containerd[1582]: time="2025-09-04T04:18:44.551428690Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551461 containerd[1582]: time="2025-09-04T04:18:44.551450497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551461 containerd[1582]: time="2025-09-04T04:18:44.551462471Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551549 containerd[1582]: time="2025-09-04T04:18:44.551472126Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551604 containerd[1582]: time="2025-09-04T04:18:44.551578203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551901 containerd[1582]: time="2025-09-04T04:18:44.551863230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551928 containerd[1582]: time="2025-09-04T04:18:44.551908985Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 04:18:44.551928 containerd[1582]: time="2025-09-04T04:18:44.551920781Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 04:18:44.551983 containerd[1582]: time="2025-09-04T04:18:44.551966379Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 04:18:44.552404 containerd[1582]: time="2025-09-04T04:18:44.552355373Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 04:18:44.552533 containerd[1582]: time="2025-09-04T04:18:44.552506220Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 04:18:44.561537 systemd-logind[1513]: New session 1 of user core.
Sep 4 04:18:44.564273 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 04:18:44.567047 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 04:18:44.577075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 04:18:44.593818 containerd[1582]: time="2025-09-04T04:18:44.593778517Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 04:18:44.593886 containerd[1582]: time="2025-09-04T04:18:44.593866429Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 04:18:44.593910 containerd[1582]: time="2025-09-04T04:18:44.593891940Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 04:18:44.593931 containerd[1582]: time="2025-09-04T04:18:44.593909718Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 04:18:44.593951 containerd[1582]: time="2025-09-04T04:18:44.593934611Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 04:18:44.593988 containerd[1582]: time="2025-09-04T04:18:44.593951810Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 04:18:44.593988 containerd[1582]: time="2025-09-04T04:18:44.593972285Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 04:18:44.594027 containerd[1582]: time="2025-09-04T04:18:44.593989349Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 04:18:44.594027 containerd[1582]: time="2025-09-04T04:18:44.594004293Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 04:18:44.594027 containerd[1582]: time="2025-09-04T04:18:44.594017054Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 04:18:44.594082 containerd[1582]: time="2025-09-04T04:18:44.594029669Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 
04:18:44.594082 containerd[1582]: time="2025-09-04T04:18:44.594048559Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 04:18:44.594280 containerd[1582]: time="2025-09-04T04:18:44.594236543Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 04:18:44.594271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594302584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594337510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594376696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594395575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594413552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594427163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594443514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594461091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594478113Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local 
type=io.containerd.sandbox.store.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594496310Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594619797Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594652089Z" level=info msg="Start snapshots syncer" Sep 4 04:18:44.595833 containerd[1582]: time="2025-09-04T04:18:44.594689355Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 04:18:44.596211 containerd[1582]: time="2025-09-04T04:18:44.595021837Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_cont
ext\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 04:18:44.596211 containerd[1582]: time="2025-09-04T04:18:44.595105225Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595246468Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595393179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595427034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595441464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595457425Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595474343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595499582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595516047Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595562369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595581029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595595227Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595637342Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595660649Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 04:18:44.596456 containerd[1582]: time="2025-09-04T04:18:44.595673473Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595686790Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595698187Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595711715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595731266Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595784000Z" level=info msg="runtime interface created"
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595794116Z" level=info msg="created NRI interface"
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595809826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595826103Z" level=info msg="Connect containerd service"
Sep 4 04:18:44.597064 containerd[1582]: time="2025-09-04T04:18:44.595861606Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 04:18:44.598075 containerd[1582]: time="2025-09-04T04:18:44.597391200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 04:18:44.599773 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 04:18:44.622069 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 04:18:44.625909 systemd-logind[1513]: New session c1 of user core.
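The containerd error above ("no network config found in /etc/cni/net.d") is normal on first boot, before any CNI plugin has installed its network configuration; containerd retries once a config file appears. As a minimal sketch of the kind of file that would satisfy that check (the name, bridge device, and subnet are illustrative assumptions, and the file is written under /tmp so the example needs no root):

```shell
# Hypothetical example: a minimal CNI bridge config of the shape containerd's
# CRI plugin looks for in /etc/cni/net.d. Values are illustrative, not taken
# from this boot log; written to /tmp so it can run unprivileged.
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/10-bridge.conf <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}
EOF
echo "wrote $(wc -c < /tmp/cni-demo/10-bridge.conf) bytes"
```

On a real node this file would live in /etc/cni/net.d, where the "Start cni network conf syncer" loop seen later in the log would pick it up.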
Sep 4 04:18:44.763953 containerd[1582]: time="2025-09-04T04:18:44.763802819Z" level=info msg="Start subscribing containerd event"
Sep 4 04:18:44.763953 containerd[1582]: time="2025-09-04T04:18:44.763929286Z" level=info msg="Start recovering state"
Sep 4 04:18:44.764158 containerd[1582]: time="2025-09-04T04:18:44.764124660Z" level=info msg="Start event monitor"
Sep 4 04:18:44.764199 containerd[1582]: time="2025-09-04T04:18:44.764163762Z" level=info msg="Start cni network conf syncer for default"
Sep 4 04:18:44.764223 containerd[1582]: time="2025-09-04T04:18:44.764203472Z" level=info msg="Start streaming server"
Sep 4 04:18:44.764245 containerd[1582]: time="2025-09-04T04:18:44.764139729Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 04:18:44.764245 containerd[1582]: time="2025-09-04T04:18:44.764240213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 04:18:44.764287 containerd[1582]: time="2025-09-04T04:18:44.764255892Z" level=info msg="runtime interface starting up..."
Sep 4 04:18:44.764287 containerd[1582]: time="2025-09-04T04:18:44.764268547Z" level=info msg="starting plugins..."
Sep 4 04:18:44.764332 containerd[1582]: time="2025-09-04T04:18:44.764305603Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 04:18:44.764475 containerd[1582]: time="2025-09-04T04:18:44.764312435Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 04:18:44.765622 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 04:18:44.768598 containerd[1582]: time="2025-09-04T04:18:44.767770105Z" level=info msg="containerd successfully booted in 0.230644s"
Sep 4 04:18:44.803474 systemd[1638]: Queued start job for default target default.target.
Sep 4 04:18:44.825229 systemd[1638]: Created slice app.slice - User Application Slice.
Sep 4 04:18:44.825270 systemd[1638]: Reached target paths.target - Paths.
Sep 4 04:18:44.825339 systemd[1638]: Reached target timers.target - Timers.
Sep 4 04:18:44.827315 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 04:18:44.841297 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 04:18:44.841559 systemd[1638]: Reached target sockets.target - Sockets.
Sep 4 04:18:44.841652 systemd[1638]: Reached target basic.target - Basic System.
Sep 4 04:18:44.841732 systemd[1638]: Reached target default.target - Main User Target.
Sep 4 04:18:44.841797 systemd[1638]: Startup finished in 204ms.
Sep 4 04:18:44.842079 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 04:18:44.849533 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 04:18:44.925085 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:39746.service - OpenSSH per-connection server daemon (10.0.0.1:39746).
Sep 4 04:18:44.928363 systemd-networkd[1465]: eth0: Gained IPv6LL
Sep 4 04:18:44.945747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 04:18:44.948888 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 04:18:44.952802 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 04:18:44.959572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:18:44.967290 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 04:18:44.998339 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 04:18:44.998645 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 04:18:45.001841 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 39746 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:45.003601 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:45.010070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 04:18:45.012912 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 04:18:45.036612 systemd-logind[1513]: New session 2 of user core.
Sep 4 04:18:45.043347 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 04:18:45.109315 sshd[1683]: Connection closed by 10.0.0.1 port 39746
Sep 4 04:18:45.109862 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:45.126049 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:39746.service: Deactivated successfully.
Sep 4 04:18:45.128229 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 04:18:45.129069 systemd-logind[1513]: Session 2 logged out. Waiting for processes to exit.
Sep 4 04:18:45.132949 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758).
Sep 4 04:18:45.135253 systemd-logind[1513]: Removed session 2.
Sep 4 04:18:45.192854 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:45.194650 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:45.199632 systemd-logind[1513]: New session 3 of user core.
Sep 4 04:18:45.227546 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 04:18:45.286525 sshd[1692]: Connection closed by 10.0.0.1 port 39758
Sep 4 04:18:45.286906 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:45.291198 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:39758.service: Deactivated successfully.
Sep 4 04:18:45.293581 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 04:18:45.294450 systemd-logind[1513]: Session 3 logged out. Waiting for processes to exit.
Sep 4 04:18:45.295909 systemd-logind[1513]: Removed session 3.
Sep 4 04:18:46.440866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:18:46.442895 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 04:18:46.445283 systemd[1]: Startup finished in 3.591s (kernel) + 8.486s (initrd) + 7.363s (userspace) = 19.441s.
Sep 4 04:18:46.489246 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:18:46.990887 kubelet[1703]: E0904 04:18:46.990802 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:18:46.995537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:18:46.995795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:18:46.996277 systemd[1]: kubelet.service: Consumed 1.668s CPU time, 266.3M memory peak.
Sep 4 04:18:55.538829 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:56432.service - OpenSSH per-connection server daemon (10.0.0.1:56432).
Sep 4 04:18:55.596120 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 56432 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:55.598474 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:55.603911 systemd-logind[1513]: New session 4 of user core.
Sep 4 04:18:55.613294 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 04:18:55.671167 sshd[1719]: Connection closed by 10.0.0.1 port 56432
Sep 4 04:18:55.671652 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:55.681045 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:56432.service: Deactivated successfully.
Sep 4 04:18:55.683317 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 04:18:55.684125 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit.
Sep 4 04:18:55.687665 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:56442.service - OpenSSH per-connection server daemon (10.0.0.1:56442).
Sep 4 04:18:55.688315 systemd-logind[1513]: Removed session 4.
Sep 4 04:18:55.743080 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 56442 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:55.744653 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:55.749305 systemd-logind[1513]: New session 5 of user core.
Sep 4 04:18:55.757438 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 04:18:55.808394 sshd[1728]: Connection closed by 10.0.0.1 port 56442
Sep 4 04:18:55.808587 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:55.821079 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:56442.service: Deactivated successfully.
Sep 4 04:18:55.823000 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 04:18:55.823873 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit.
Sep 4 04:18:55.826672 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:56452.service - OpenSSH per-connection server daemon (10.0.0.1:56452).
Sep 4 04:18:55.827419 systemd-logind[1513]: Removed session 5.
Sep 4 04:18:55.895523 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 56452 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:55.896866 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:55.901296 systemd-logind[1513]: New session 6 of user core.
Sep 4 04:18:55.911307 systemd[1]: Started session-6.scope - Session 6 of User core.
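The kubelet failure above (and its repeats later in this log) is the expected state on a node where /var/lib/kubelet/config.yaml has not been created yet; on a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`. As a sketch of the general shape of such a KubeletConfiguration (the field values are illustrative assumptions, not recovered from this log, and the file is written under /tmp rather than the real path):

```shell
# Hypothetical sketch: a minimal KubeletConfiguration of the kind kubeadm
# writes to /var/lib/kubelet/config.yaml. Values are illustrative only;
# written to /tmp so the example runs without touching the real node state.
mkdir -p /tmp/kubelet-demo
cat > /tmp/kubelet-demo/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false
EOF
echo "wrote $(wc -l < /tmp/kubelet-demo/config.yaml) lines"
```

Once that file exists at the real path, the kubelet's startup no longer fails at the config-load step, which is consistent with systemd simply retrying the unit in the meantime.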
Sep 4 04:18:55.967342 sshd[1737]: Connection closed by 10.0.0.1 port 56452
Sep 4 04:18:55.967799 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:55.984375 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:56452.service: Deactivated successfully.
Sep 4 04:18:55.986634 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 04:18:55.987462 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit.
Sep 4 04:18:55.989996 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:56464.service - OpenSSH per-connection server daemon (10.0.0.1:56464).
Sep 4 04:18:55.990870 systemd-logind[1513]: Removed session 6.
Sep 4 04:18:56.057342 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 56464 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:56.058749 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:56.063400 systemd-logind[1513]: New session 7 of user core.
Sep 4 04:18:56.073312 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 04:18:56.133162 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 04:18:56.133488 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 04:18:56.152593 sudo[1747]: pam_unix(sudo:session): session closed for user root
Sep 4 04:18:56.154363 sshd[1746]: Connection closed by 10.0.0.1 port 56464
Sep 4 04:18:56.154833 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:56.165617 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:56464.service: Deactivated successfully.
Sep 4 04:18:56.167652 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 04:18:56.168592 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit.
Sep 4 04:18:56.170499 systemd-logind[1513]: Removed session 7.
Sep 4 04:18:56.171916 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:56492.service - OpenSSH per-connection server daemon (10.0.0.1:56492).
Sep 4 04:18:56.238297 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 56492 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:56.239749 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:56.243945 systemd-logind[1513]: New session 8 of user core.
Sep 4 04:18:56.250286 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 04:18:56.304467 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 04:18:56.304778 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 04:18:56.353831 sudo[1758]: pam_unix(sudo:session): session closed for user root
Sep 4 04:18:56.360697 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 04:18:56.361106 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 04:18:56.371460 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 04:18:56.420161 augenrules[1780]: No rules
Sep 4 04:18:56.421798 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 04:18:56.422079 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 04:18:56.423322 sudo[1757]: pam_unix(sudo:session): session closed for user root
Sep 4 04:18:56.425005 sshd[1756]: Connection closed by 10.0.0.1 port 56492
Sep 4 04:18:56.425418 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Sep 4 04:18:56.433903 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:56492.service: Deactivated successfully.
Sep 4 04:18:56.435930 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 04:18:56.436913 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit.
Sep 4 04:18:56.439729 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:56508.service - OpenSSH per-connection server daemon (10.0.0.1:56508).
Sep 4 04:18:56.440665 systemd-logind[1513]: Removed session 8.
Sep 4 04:18:56.512965 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 56508 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:18:56.514877 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:18:56.520430 systemd-logind[1513]: New session 9 of user core.
Sep 4 04:18:56.530472 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 04:18:56.587420 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 04:18:56.587734 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 04:18:57.246594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 04:18:57.248662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:18:57.461355 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 04:18:57.482605 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 04:18:57.559686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:18:57.565419 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:18:57.704889 kubelet[1822]: E0904 04:18:57.704760 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:18:57.711914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:18:57.712153 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:18:57.712615 systemd[1]: kubelet.service: Consumed 408ms CPU time, 110.4M memory peak.
Sep 4 04:18:58.230163 dockerd[1816]: time="2025-09-04T04:18:58.230059848Z" level=info msg="Starting up"
Sep 4 04:18:58.231452 dockerd[1816]: time="2025-09-04T04:18:58.231415463Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 04:18:58.258030 dockerd[1816]: time="2025-09-04T04:18:58.257878988Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 4 04:18:59.309999 dockerd[1816]: time="2025-09-04T04:18:59.309890494Z" level=info msg="Loading containers: start."
Sep 4 04:18:59.429291 kernel: Initializing XFRM netlink socket
Sep 4 04:19:00.059571 systemd-networkd[1465]: docker0: Link UP
Sep 4 04:19:00.260510 dockerd[1816]: time="2025-09-04T04:19:00.260430855Z" level=info msg="Loading containers: done."
Sep 4 04:19:00.472084 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck664658962-merged.mount: Deactivated successfully.
Sep 4 04:19:00.533661 dockerd[1816]: time="2025-09-04T04:19:00.533580431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 04:19:00.534194 dockerd[1816]: time="2025-09-04T04:19:00.533791833Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 4 04:19:00.534194 dockerd[1816]: time="2025-09-04T04:19:00.533964578Z" level=info msg="Initializing buildkit"
Sep 4 04:19:01.387031 dockerd[1816]: time="2025-09-04T04:19:01.386900552Z" level=info msg="Completed buildkit initialization"
Sep 4 04:19:01.392806 dockerd[1816]: time="2025-09-04T04:19:01.392743115Z" level=info msg="Daemon has completed initialization"
Sep 4 04:19:01.393071 dockerd[1816]: time="2025-09-04T04:19:01.392987181Z" level=info msg="API listen on /run/docker.sock"
Sep 4 04:19:01.393198 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 04:19:02.503972 containerd[1582]: time="2025-09-04T04:19:02.503873071Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 4 04:19:05.913272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273604668.mount: Deactivated successfully.
Sep 4 04:19:07.962724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 04:19:07.965287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:19:08.473432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
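The "Scheduled restart job" entries for kubelet.service (counter 1 at 04:18:57, 2 at 04:19:07, 3 at 04:19:18) arrive roughly ten seconds apart, which is consistent with a unit configured to restart on failure with a ten-second delay. A hypothetical drop-in sketching that policy (the actual unit settings on this node are an assumption, not read from the log; written under /tmp purely for illustration, since a real drop-in would go in /etc/systemd/system/kubelet.service.d/):

```shell
# Hypothetical drop-in illustrating the restart cadence implied by the
# ~10 s gaps between kubelet.service restart attempts in this log.
# Settings are assumed, not recovered from the node; /tmp is used so the
# example does not modify any real systemd configuration.
mkdir -p /tmp/systemd-demo/kubelet.service.d
cat > /tmp/systemd-demo/kubelet.service.d/10-restart.conf <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
cat /tmp/systemd-demo/kubelet.service.d/10-restart.conf
```

After installing a real drop-in, `systemctl daemon-reload` is what makes systemd pick it up, matching the "Reload requested" entry at the end of this log excerpt.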
Sep 4 04:19:08.495491 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:19:08.576765 kubelet[2067]: E0904 04:19:08.576668 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:19:08.581606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:19:08.582032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:19:08.582608 systemd[1]: kubelet.service: Consumed 305ms CPU time, 108.8M memory peak.
Sep 4 04:19:11.436107 containerd[1582]: time="2025-09-04T04:19:11.436018672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:11.436884 containerd[1582]: time="2025-09-04T04:19:11.436816023Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 4 04:19:11.438013 containerd[1582]: time="2025-09-04T04:19:11.437960090Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:11.440714 containerd[1582]: time="2025-09-04T04:19:11.440672573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:11.441694 containerd[1582]: time="2025-09-04T04:19:11.441656714Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 8.937677064s"
Sep 4 04:19:11.441694 containerd[1582]: time="2025-09-04T04:19:11.441692076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 4 04:19:11.442412 containerd[1582]: time="2025-09-04T04:19:11.442348742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 4 04:19:13.619239 containerd[1582]: time="2025-09-04T04:19:13.619102661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:13.620177 containerd[1582]: time="2025-09-04T04:19:13.620103693Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 4 04:19:13.621470 containerd[1582]: time="2025-09-04T04:19:13.621404183Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:13.624234 containerd[1582]: time="2025-09-04T04:19:13.624188546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:13.625272 containerd[1582]: time="2025-09-04T04:19:13.625231078Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.182818286s"
Sep 4 04:19:13.625272 containerd[1582]: time="2025-09-04T04:19:13.625270380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 4 04:19:13.626037 containerd[1582]: time="2025-09-04T04:19:13.626012352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 4 04:19:15.027508 containerd[1582]: time="2025-09-04T04:19:15.027414313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:15.028664 containerd[1582]: time="2025-09-04T04:19:15.028624394Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 4 04:19:15.029766 containerd[1582]: time="2025-09-04T04:19:15.029728979Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:15.032748 containerd[1582]: time="2025-09-04T04:19:15.032693062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:15.033965 containerd[1582]: time="2025-09-04T04:19:15.033901159Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.40784974s"
Sep 4 04:19:15.033965 containerd[1582]: time="2025-09-04T04:19:15.033951271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 4 04:19:15.034624 containerd[1582]: time="2025-09-04T04:19:15.034461933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 4 04:19:17.827351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573637031.mount: Deactivated successfully.
Sep 4 04:19:18.703042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 04:19:18.707105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:19:19.131310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:19:19.139156 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:19:19.260985 kubelet[2145]: E0904 04:19:19.260875 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:19:19.266775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:19:19.267017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:19:19.267484 systemd[1]: kubelet.service: Consumed 388ms CPU time, 110.4M memory peak.
Sep 4 04:19:19.776040 containerd[1582]: time="2025-09-04T04:19:19.775960295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:19.776807 containerd[1582]: time="2025-09-04T04:19:19.776750229Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 4 04:19:19.777835 containerd[1582]: time="2025-09-04T04:19:19.777790326Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:19.780027 containerd[1582]: time="2025-09-04T04:19:19.779982399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:19.780875 containerd[1582]: time="2025-09-04T04:19:19.780803129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.746293858s"
Sep 4 04:19:19.780875 containerd[1582]: time="2025-09-04T04:19:19.780856687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 4 04:19:19.782154 containerd[1582]: time="2025-09-04T04:19:19.781728318Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 04:19:20.344482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845139676.mount: Deactivated successfully.
Sep 4 04:19:21.746313 containerd[1582]: time="2025-09-04T04:19:21.746221654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:21.747685 containerd[1582]: time="2025-09-04T04:19:21.747649881Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 04:19:21.749360 containerd[1582]: time="2025-09-04T04:19:21.749294545Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:21.753045 containerd[1582]: time="2025-09-04T04:19:21.752959008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:21.754209 containerd[1582]: time="2025-09-04T04:19:21.754114204Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.972340744s"
Sep 4 04:19:21.754209 containerd[1582]: time="2025-09-04T04:19:21.754202100Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 04:19:21.755444 containerd[1582]: time="2025-09-04T04:19:21.755362761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 04:19:22.372871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532735620.mount: Deactivated successfully.
Sep 4 04:19:22.380626 containerd[1582]: time="2025-09-04T04:19:22.380539634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:19:22.381211 containerd[1582]: time="2025-09-04T04:19:22.381165631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 04:19:22.382375 containerd[1582]: time="2025-09-04T04:19:22.382331507Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:19:22.384422 containerd[1582]: time="2025-09-04T04:19:22.384350476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:19:22.385058 containerd[1582]: time="2025-09-04T04:19:22.384994530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 629.581335ms"
Sep 4 04:19:22.385058 containerd[1582]: time="2025-09-04T04:19:22.385042205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 04:19:22.385794 containerd[1582]: time="2025-09-04T04:19:22.385759752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 4 04:19:22.962139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875431789.mount: Deactivated successfully.
Sep 4 04:19:25.514201 containerd[1582]: time="2025-09-04T04:19:25.514085804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:25.515159 containerd[1582]: time="2025-09-04T04:19:25.515085912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 4 04:19:25.517615 containerd[1582]: time="2025-09-04T04:19:25.517572642Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:25.521431 containerd[1582]: time="2025-09-04T04:19:25.521294232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:25.523218 containerd[1582]: time="2025-09-04T04:19:25.523168430Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.137365256s"
Sep 4 04:19:25.523218 containerd[1582]: time="2025-09-04T04:19:25.523214847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 4 04:19:28.155650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:19:28.155888 systemd[1]: kubelet.service: Consumed 388ms CPU time, 110.4M memory peak.
Sep 4 04:19:28.158731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:19:28.192515 systemd[1]: Reload requested from client PID 2294 ('systemctl') (unit session-9.scope)...
Sep 4 04:19:28.192564 systemd[1]: Reloading... Sep 4 04:19:28.295158 zram_generator::config[2332]: No configuration found. Sep 4 04:19:28.647204 systemd[1]: Reloading finished in 453 ms. Sep 4 04:19:28.733073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 04:19:28.733201 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 04:19:28.733564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:19:28.733614 systemd[1]: kubelet.service: Consumed 191ms CPU time, 98.3M memory peak. Sep 4 04:19:28.735467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:19:28.935017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:19:28.948634 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 04:19:28.993362 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:19:28.993362 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 04:19:28.993362 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:19:28.993882 kubelet[2384]: I0904 04:19:28.993436 2384 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 04:19:29.033972 update_engine[1514]: I20250904 04:19:29.033829 1514 update_attempter.cc:509] Updating boot flags... 
Sep 4 04:19:29.406323 kubelet[2384]: I0904 04:19:29.406266 2384 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 04:19:29.406323 kubelet[2384]: I0904 04:19:29.406302 2384 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 04:19:29.406693 kubelet[2384]: I0904 04:19:29.406663 2384 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 04:19:29.467453 kubelet[2384]: E0904 04:19:29.465486 2384 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:19:29.473553 kubelet[2384]: I0904 04:19:29.473504 2384 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 04:19:29.487195 kubelet[2384]: I0904 04:19:29.487150 2384 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 04:19:29.502194 kubelet[2384]: I0904 04:19:29.502148 2384 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 04:19:29.502356 kubelet[2384]: I0904 04:19:29.502313 2384 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 04:19:29.502557 kubelet[2384]: I0904 04:19:29.502503 2384 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 04:19:29.502862 kubelet[2384]: I0904 04:19:29.502556 2384 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 04:19:29.503098 kubelet[2384]: I0904 04:19:29.502877 2384 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 04:19:29.503098 kubelet[2384]: I0904 04:19:29.502892 2384 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 04:19:29.503098 kubelet[2384]: I0904 04:19:29.503090 2384 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:19:29.509853 kubelet[2384]: I0904 04:19:29.509823 2384 kubelet.go:408] "Attempting to sync node with API server" Sep 4 04:19:29.509853 kubelet[2384]: I0904 04:19:29.509855 2384 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 04:19:29.509949 kubelet[2384]: I0904 04:19:29.509900 2384 kubelet.go:314] "Adding apiserver pod source" Sep 4 04:19:29.509949 kubelet[2384]: I0904 04:19:29.509922 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 04:19:29.515379 kubelet[2384]: I0904 04:19:29.515343 2384 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 04:19:29.515868 kubelet[2384]: I0904 04:19:29.515840 2384 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 04:19:29.515955 kubelet[2384]: W0904 04:19:29.515928 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 04:19:29.519552 kubelet[2384]: W0904 04:19:29.519500 2384 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 04:19:29.519605 kubelet[2384]: E0904 04:19:29.519559 2384 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:19:29.519605 kubelet[2384]: W0904 04:19:29.519493 2384 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 04:19:29.519605 kubelet[2384]: E0904 04:19:29.519592 2384 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:19:29.521743 kubelet[2384]: I0904 04:19:29.521480 2384 server.go:1274] "Started kubelet" Sep 4 04:19:29.522110 kubelet[2384]: I0904 04:19:29.522069 2384 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 04:19:29.524828 kubelet[2384]: I0904 04:19:29.524801 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 04:19:29.526220 kubelet[2384]: I0904 04:19:29.526190 2384 server.go:449] "Adding debug handlers to kubelet server" Sep 4 04:19:29.534424 kubelet[2384]: I0904 04:19:29.534378 2384 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 04:19:29.534718 kubelet[2384]: I0904 04:19:29.534663 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 04:19:29.542081 kubelet[2384]: I0904 04:19:29.542041 2384 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 04:19:29.543847 kubelet[2384]: I0904 04:19:29.543812 2384 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 04:19:29.544533 kubelet[2384]: E0904 04:19:29.544501 2384 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 04:19:29.545293 kubelet[2384]: I0904 04:19:29.545088 2384 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 04:19:29.546773 kubelet[2384]: E0904 04:19:29.546711 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Sep 4 04:19:29.550763 kubelet[2384]: E0904 04:19:29.547351 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861f96b720c99d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 04:19:29.521449431 +0000 UTC m=+0.568391520,LastTimestamp:2025-09-04 04:19:29.521449431 +0000 UTC m=+0.568391520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 04:19:29.551827 kubelet[2384]: E0904 04:19:29.551799 2384 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 04:19:29.554405 kubelet[2384]: I0904 04:19:29.554381 2384 reconciler.go:26] "Reconciler: start to sync state" Sep 4 04:19:29.554623 kubelet[2384]: W0904 04:19:29.554543 2384 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 04:19:29.554669 kubelet[2384]: E0904 04:19:29.554625 2384 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:19:29.556277 kubelet[2384]: I0904 04:19:29.556252 2384 factory.go:221] Registration of the containerd container factory successfully Sep 4 04:19:29.556277 kubelet[2384]: I0904 04:19:29.556272 2384 factory.go:221] Registration of the systemd container factory successfully Sep 4 04:19:29.556399 kubelet[2384]: I0904 04:19:29.556377 2384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 04:19:29.582752 kubelet[2384]: I0904 04:19:29.582685 2384 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 04:19:29.582752 kubelet[2384]: I0904 04:19:29.582704 2384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 04:19:29.582752 kubelet[2384]: I0904 04:19:29.582719 2384 state_mem.go:36] "Initialized new in-memory state store" 
Sep 4 04:19:29.583916 kubelet[2384]: I0904 04:19:29.582749 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 04:19:29.584082 kubelet[2384]: I0904 04:19:29.584059 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 04:19:29.584282 kubelet[2384]: I0904 04:19:29.584099 2384 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 04:19:29.584282 kubelet[2384]: I0904 04:19:29.584241 2384 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 04:19:29.584344 kubelet[2384]: E0904 04:19:29.584279 2384 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 04:19:29.645041 kubelet[2384]: E0904 04:19:29.644975 2384 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 04:19:29.684581 kubelet[2384]: E0904 04:19:29.684447 2384 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 04:19:29.685263 kubelet[2384]: W0904 04:19:29.685191 2384 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 04:19:29.685331 kubelet[2384]: E0904 04:19:29.685272 2384 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:19:29.704520 kubelet[2384]: I0904 04:19:29.704482 2384 policy_none.go:49] "None policy: Start" Sep 4 04:19:29.705727 kubelet[2384]: I0904 04:19:29.705660 2384 memory_manager.go:170] "Starting 
memorymanager" policy="None" Sep 4 04:19:29.705727 kubelet[2384]: I0904 04:19:29.705689 2384 state_mem.go:35] "Initializing new in-memory state store" Sep 4 04:19:29.713115 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 04:19:29.732737 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 04:19:29.736512 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 04:19:29.745682 kubelet[2384]: E0904 04:19:29.745644 2384 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 04:19:29.747254 kubelet[2384]: E0904 04:19:29.747195 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Sep 4 04:19:29.750143 kubelet[2384]: I0904 04:19:29.750104 2384 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 04:19:29.750382 kubelet[2384]: I0904 04:19:29.750365 2384 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 04:19:29.750433 kubelet[2384]: I0904 04:19:29.750380 2384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 04:19:29.750724 kubelet[2384]: I0904 04:19:29.750633 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 04:19:29.752014 kubelet[2384]: E0904 04:19:29.751989 2384 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 04:19:29.852800 kubelet[2384]: I0904 04:19:29.852711 2384 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:19:29.853405 kubelet[2384]: E0904 04:19:29.853344 
2384 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 04:19:29.895877 systemd[1]: Created slice kubepods-burstable-podaaa18e3965d9f670dc3da04ca990942b.slice - libcontainer container kubepods-burstable-podaaa18e3965d9f670dc3da04ca990942b.slice. Sep 4 04:19:29.927877 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 4 04:19:29.943340 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 4 04:19:29.957167 kubelet[2384]: I0904 04:19:29.957072 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:29.957167 kubelet[2384]: I0904 04:19:29.957164 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:29.957406 kubelet[2384]: I0904 04:19:29.957197 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 04:19:29.957406 
kubelet[2384]: I0904 04:19:29.957221 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:29.957406 kubelet[2384]: I0904 04:19:29.957244 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:29.957406 kubelet[2384]: I0904 04:19:29.957301 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:29.957406 kubelet[2384]: I0904 04:19:29.957333 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:29.957578 kubelet[2384]: I0904 04:19:29.957351 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:29.957578 kubelet[2384]: I0904 
04:19:29.957364 2384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:30.055376 kubelet[2384]: I0904 04:19:30.055305 2384 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:19:30.055773 kubelet[2384]: E0904 04:19:30.055732 2384 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 04:19:30.147914 kubelet[2384]: E0904 04:19:30.147832 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Sep 4 04:19:30.225397 kubelet[2384]: E0904 04:19:30.225330 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.226213 containerd[1582]: time="2025-09-04T04:19:30.226167800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaa18e3965d9f670dc3da04ca990942b,Namespace:kube-system,Attempt:0,}" Sep 4 04:19:30.240621 kubelet[2384]: E0904 04:19:30.240542 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.241228 containerd[1582]: time="2025-09-04T04:19:30.241178215Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 4 04:19:30.247724 kubelet[2384]: E0904 04:19:30.247070 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.247866 containerd[1582]: time="2025-09-04T04:19:30.247821569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 4 04:19:30.256902 containerd[1582]: time="2025-09-04T04:19:30.256788877Z" level=info msg="connecting to shim 99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea" address="unix:///run/containerd/s/268471a9729ec7185cb63e4d49215190044439b52c2e98d5a87727f940e5ca65" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:30.293514 containerd[1582]: time="2025-09-04T04:19:30.293438728Z" level=info msg="connecting to shim 176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5" address="unix:///run/containerd/s/16bfc7f85a16863e49d238c4047e1f441731f675eb5fc8a3511fd7ee2162bc13" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:30.300393 systemd[1]: Started cri-containerd-99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea.scope - libcontainer container 99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea. 
Sep 4 04:19:30.304340 containerd[1582]: time="2025-09-04T04:19:30.304291188Z" level=info msg="connecting to shim 1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7" address="unix:///run/containerd/s/3ec87d7c43856146256f974ad31f320c907e741f8a0ecd3a491bafa2d4a3df77" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:30.334284 systemd[1]: Started cri-containerd-176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5.scope - libcontainer container 176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5. Sep 4 04:19:30.342882 systemd[1]: Started cri-containerd-1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7.scope - libcontainer container 1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7. Sep 4 04:19:30.407376 containerd[1582]: time="2025-09-04T04:19:30.407320594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaa18e3965d9f670dc3da04ca990942b,Namespace:kube-system,Attempt:0,} returns sandbox id \"99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea\"" Sep 4 04:19:30.410553 kubelet[2384]: E0904 04:19:30.410517 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.412766 containerd[1582]: time="2025-09-04T04:19:30.412716958Z" level=info msg="CreateContainer within sandbox \"99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 04:19:30.420703 containerd[1582]: time="2025-09-04T04:19:30.420639116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7\"" Sep 4 04:19:30.422048 kubelet[2384]: E0904 04:19:30.422014 2384 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.423816 containerd[1582]: time="2025-09-04T04:19:30.423691518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5\"" Sep 4 04:19:30.424310 containerd[1582]: time="2025-09-04T04:19:30.424274912Z" level=info msg="CreateContainer within sandbox \"1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 04:19:30.425328 kubelet[2384]: E0904 04:19:30.425299 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.426956 containerd[1582]: time="2025-09-04T04:19:30.426924048Z" level=info msg="CreateContainer within sandbox \"176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 04:19:30.429494 containerd[1582]: time="2025-09-04T04:19:30.429462872Z" level=info msg="Container 23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:19:30.438574 containerd[1582]: time="2025-09-04T04:19:30.438496276Z" level=info msg="Container 90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:19:30.438753 containerd[1582]: time="2025-09-04T04:19:30.438697313Z" level=info msg="CreateContainer within sandbox \"99de174747166b7a6d38383fd72c6db7f64e62bdba423834c905959865caafea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822\"" Sep 4 04:19:30.440339 containerd[1582]: time="2025-09-04T04:19:30.440307133Z" level=info msg="StartContainer for \"23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822\"" Sep 4 04:19:30.441509 containerd[1582]: time="2025-09-04T04:19:30.441479064Z" level=info msg="connecting to shim 23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822" address="unix:///run/containerd/s/268471a9729ec7185cb63e4d49215190044439b52c2e98d5a87727f940e5ca65" protocol=ttrpc version=3 Sep 4 04:19:30.442120 containerd[1582]: time="2025-09-04T04:19:30.442083007Z" level=info msg="Container 669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:19:30.446878 containerd[1582]: time="2025-09-04T04:19:30.446843983Z" level=info msg="CreateContainer within sandbox \"1c3bc955e83432b8216db5676ba71f32615b41ce62f725582e95ec537e46add7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844\"" Sep 4 04:19:30.448159 containerd[1582]: time="2025-09-04T04:19:30.447486338Z" level=info msg="StartContainer for \"90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844\"" Sep 4 04:19:30.448834 containerd[1582]: time="2025-09-04T04:19:30.448800466Z" level=info msg="connecting to shim 90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844" address="unix:///run/containerd/s/3ec87d7c43856146256f974ad31f320c907e741f8a0ecd3a491bafa2d4a3df77" protocol=ttrpc version=3 Sep 4 04:19:30.451342 containerd[1582]: time="2025-09-04T04:19:30.451306402Z" level=info msg="CreateContainer within sandbox \"176275e56c67359ecc740802a0bfc051f8bd6afaf0be4e9b2deab3558e870bd5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e\"" Sep 4 04:19:30.451997 containerd[1582]: 
time="2025-09-04T04:19:30.451960344Z" level=info msg="StartContainer for \"669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e\"" Sep 4 04:19:30.453005 containerd[1582]: time="2025-09-04T04:19:30.452977527Z" level=info msg="connecting to shim 669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e" address="unix:///run/containerd/s/16bfc7f85a16863e49d238c4047e1f441731f675eb5fc8a3511fd7ee2162bc13" protocol=ttrpc version=3 Sep 4 04:19:30.458153 kubelet[2384]: I0904 04:19:30.457991 2384 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:19:30.458811 kubelet[2384]: E0904 04:19:30.458782 2384 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 04:19:30.466770 systemd[1]: Started cri-containerd-23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822.scope - libcontainer container 23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822. Sep 4 04:19:30.478299 systemd[1]: Started cri-containerd-90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844.scope - libcontainer container 90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844. Sep 4 04:19:30.482676 systemd[1]: Started cri-containerd-669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e.scope - libcontainer container 669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e. 
Sep 4 04:19:30.958627 containerd[1582]: time="2025-09-04T04:19:30.958583975Z" level=info msg="StartContainer for \"669e1f06340c7a48e8e33a6911f68e9640b0d55caede7d0950a37442d6dfa73e\" returns successfully" Sep 4 04:19:30.958751 containerd[1582]: time="2025-09-04T04:19:30.958740737Z" level=info msg="StartContainer for \"90c9dd9cd251aa9eee7a74c436f37d3ca5d84cb84cf444adbd429eed2e8ce844\" returns successfully" Sep 4 04:19:30.961939 containerd[1582]: time="2025-09-04T04:19:30.961802051Z" level=info msg="StartContainer for \"23ed1e01f5cc7cafd4b19508798ea46d3cf9894e11d3b624c9ba17ab5c242822\" returns successfully" Sep 4 04:19:30.978798 kubelet[2384]: E0904 04:19:30.978753 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:30.979168 kubelet[2384]: E0904 04:19:30.979119 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:31.262605 kubelet[2384]: I0904 04:19:31.262470 2384 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:19:31.980528 kubelet[2384]: E0904 04:19:31.980466 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:31.980730 kubelet[2384]: E0904 04:19:31.980670 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:32.211698 kubelet[2384]: E0904 04:19:32.211627 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 04:19:32.315315 kubelet[2384]: I0904 04:19:32.314906 2384 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Sep 4 04:19:32.315315 kubelet[2384]: E0904 04:19:32.314955 2384 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 04:19:32.512217 kubelet[2384]: I0904 04:19:32.511928 2384 apiserver.go:52] "Watching apiserver" Sep 4 04:19:32.545691 kubelet[2384]: I0904 04:19:32.545640 2384 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 04:19:32.984853 kubelet[2384]: E0904 04:19:32.984796 2384 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:32.985058 kubelet[2384]: E0904 04:19:32.984965 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:33.328648 kubelet[2384]: E0904 04:19:33.328488 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:33.981395 kubelet[2384]: E0904 04:19:33.981344 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:34.889736 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-9.scope)... Sep 4 04:19:34.889756 systemd[1]: Reloading... Sep 4 04:19:34.996163 zram_generator::config[2724]: No configuration found. Sep 4 04:19:35.307197 systemd[1]: Reloading finished in 417 ms. Sep 4 04:19:35.336865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:19:35.346263 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 4 04:19:35.346582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:19:35.346646 systemd[1]: kubelet.service: Consumed 997ms CPU time, 132.9M memory peak. Sep 4 04:19:35.348800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:19:35.593973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:19:35.605728 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 04:19:35.941838 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:19:35.943374 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 04:19:35.943374 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 04:19:35.943374 kubelet[2768]: I0904 04:19:35.942494 2768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 04:19:35.952156 kubelet[2768]: I0904 04:19:35.952098 2768 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 04:19:35.952156 kubelet[2768]: I0904 04:19:35.952162 2768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 04:19:35.952611 kubelet[2768]: I0904 04:19:35.952462 2768 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 04:19:35.953934 kubelet[2768]: I0904 04:19:35.953904 2768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 04:19:35.955872 kubelet[2768]: I0904 04:19:35.955830 2768 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 04:19:35.963256 kubelet[2768]: I0904 04:19:35.961561 2768 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 04:19:35.967277 kubelet[2768]: I0904 04:19:35.967228 2768 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 04:19:35.967474 kubelet[2768]: I0904 04:19:35.967455 2768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 04:19:35.967664 kubelet[2768]: I0904 04:19:35.967617 2768 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 04:19:35.967918 kubelet[2768]: I0904 04:19:35.967662 2768 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 04:19:35.967918 kubelet[2768]: I0904 04:19:35.967861 2768 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 04:19:35.967918 kubelet[2768]: I0904 04:19:35.967871 2768 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 04:19:35.967918 kubelet[2768]: I0904 04:19:35.967901 2768 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:19:35.968097 kubelet[2768]: I0904 04:19:35.968025 2768 kubelet.go:408] "Attempting to sync node with API server" Sep 4 04:19:35.968097 kubelet[2768]: I0904 04:19:35.968038 2768 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 04:19:35.968097 kubelet[2768]: I0904 04:19:35.968072 2768 kubelet.go:314] "Adding apiserver pod source" Sep 4 04:19:35.968097 kubelet[2768]: I0904 04:19:35.968082 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 04:19:35.969243 kubelet[2768]: I0904 04:19:35.968768 2768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 04:19:35.970575 kubelet[2768]: I0904 04:19:35.970513 2768 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 04:19:35.972493 kubelet[2768]: I0904 04:19:35.972398 2768 server.go:1274] "Started kubelet" Sep 4 04:19:35.972981 kubelet[2768]: I0904 04:19:35.972897 2768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 04:19:35.973223 kubelet[2768]: I0904 04:19:35.973053 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 04:19:35.973781 kubelet[2768]: I0904 04:19:35.973753 2768 server.go:449] "Adding debug handlers to kubelet server" Sep 4 04:19:35.976883 kubelet[2768]: I0904 04:19:35.976294 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 04:19:35.977419 kubelet[2768]: I0904 04:19:35.977383 2768 server.go:236] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 04:19:35.979900 kubelet[2768]: I0904 04:19:35.979219 2768 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 04:19:35.979900 kubelet[2768]: I0904 04:19:35.979348 2768 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 04:19:35.979900 kubelet[2768]: I0904 04:19:35.979503 2768 reconciler.go:26] "Reconciler: start to sync state" Sep 4 04:19:35.979900 kubelet[2768]: I0904 04:19:35.979898 2768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 04:19:35.984580 kubelet[2768]: I0904 04:19:35.984544 2768 factory.go:221] Registration of the systemd container factory successfully Sep 4 04:19:35.984724 kubelet[2768]: I0904 04:19:35.984686 2768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 04:19:35.985976 kubelet[2768]: E0904 04:19:35.985939 2768 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 04:19:35.988926 kubelet[2768]: I0904 04:19:35.988891 2768 factory.go:221] Registration of the containerd container factory successfully Sep 4 04:19:35.989907 kubelet[2768]: I0904 04:19:35.989866 2768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 04:19:35.992174 kubelet[2768]: I0904 04:19:35.992117 2768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 04:19:35.992174 kubelet[2768]: I0904 04:19:35.992168 2768 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 04:19:35.992330 kubelet[2768]: I0904 04:19:35.992193 2768 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 04:19:35.992330 kubelet[2768]: E0904 04:19:35.992259 2768 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 04:19:36.021075 kubelet[2768]: I0904 04:19:36.021029 2768 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 04:19:36.021075 kubelet[2768]: I0904 04:19:36.021069 2768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 04:19:36.021075 kubelet[2768]: I0904 04:19:36.021089 2768 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:19:36.021343 kubelet[2768]: I0904 04:19:36.021289 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 04:19:36.021343 kubelet[2768]: I0904 04:19:36.021324 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 04:19:36.021343 kubelet[2768]: I0904 04:19:36.021343 2768 policy_none.go:49] "None policy: Start" Sep 4 04:19:36.021983 kubelet[2768]: I0904 04:19:36.021964 2768 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 04:19:36.021983 kubelet[2768]: I0904 04:19:36.021985 2768 state_mem.go:35] "Initializing new in-memory state store" Sep 4 04:19:36.022173 kubelet[2768]: I0904 04:19:36.022107 2768 state_mem.go:75] "Updated machine memory state" Sep 4 04:19:36.030364 kubelet[2768]: I0904 04:19:36.029910 2768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 04:19:36.030364 kubelet[2768]: I0904 04:19:36.030192 2768 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 04:19:36.030364 kubelet[2768]: I0904 04:19:36.030205 2768 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 4 04:19:36.030737 kubelet[2768]: I0904 04:19:36.030694 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 04:19:36.100423 kubelet[2768]: E0904 04:19:36.100349 2768 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.137046 kubelet[2768]: I0904 04:19:36.136987 2768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:19:36.146890 kubelet[2768]: I0904 04:19:36.146835 2768 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 4 04:19:36.147035 kubelet[2768]: I0904 04:19:36.146954 2768 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 04:19:36.280594 kubelet[2768]: I0904 04:19:36.280541 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.280594 kubelet[2768]: I0904 04:19:36.280592 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.280887 kubelet[2768]: I0904 04:19:36.280627 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.280887 kubelet[2768]: I0904 04:19:36.280650 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.280887 kubelet[2768]: I0904 04:19:36.280705 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:36.280887 kubelet[2768]: I0904 04:19:36.280754 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:36.280887 kubelet[2768]: I0904 04:19:36.280779 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:19:36.281057 kubelet[2768]: I0904 04:19:36.280801 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " 
pod="kube-system/kube-scheduler-localhost" Sep 4 04:19:36.281057 kubelet[2768]: I0904 04:19:36.280819 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa18e3965d9f670dc3da04ca990942b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa18e3965d9f670dc3da04ca990942b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:36.400327 kubelet[2768]: E0904 04:19:36.400272 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:36.400482 kubelet[2768]: E0904 04:19:36.400338 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:36.400557 kubelet[2768]: E0904 04:19:36.400531 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:36.968826 kubelet[2768]: I0904 04:19:36.968762 2768 apiserver.go:52] "Watching apiserver" Sep 4 04:19:36.979529 kubelet[2768]: I0904 04:19:36.979472 2768 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 04:19:37.006276 kubelet[2768]: E0904 04:19:37.005626 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:37.006585 kubelet[2768]: E0904 04:19:37.006508 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:37.080093 kubelet[2768]: E0904 04:19:37.080051 2768 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 04:19:37.080385 kubelet[2768]: E0904 04:19:37.080366 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:37.148331 kubelet[2768]: I0904 04:19:37.148244 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.148215105 podStartE2EDuration="1.148215105s" podCreationTimestamp="2025-09-04 04:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:37.073372847 +0000 UTC m=+1.462300774" watchObservedRunningTime="2025-09-04 04:19:37.148215105 +0000 UTC m=+1.537143032" Sep 4 04:19:37.166718 kubelet[2768]: I0904 04:19:37.166636 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.166614209 podStartE2EDuration="1.166614209s" podCreationTimestamp="2025-09-04 04:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:37.148448707 +0000 UTC m=+1.537376634" watchObservedRunningTime="2025-09-04 04:19:37.166614209 +0000 UTC m=+1.555542146" Sep 4 04:19:37.179403 kubelet[2768]: I0904 04:19:37.179308 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.179296819 podStartE2EDuration="4.179296819s" podCreationTimestamp="2025-09-04 04:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:37.167051696 +0000 UTC m=+1.555979623" watchObservedRunningTime="2025-09-04 04:19:37.179296819 +0000 UTC 
m=+1.568224746" Sep 4 04:19:38.006515 kubelet[2768]: E0904 04:19:38.006475 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:38.006968 kubelet[2768]: E0904 04:19:38.006549 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:39.966881 kubelet[2768]: I0904 04:19:39.966814 2768 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 04:19:39.967823 kubelet[2768]: I0904 04:19:39.967640 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 04:19:39.967859 containerd[1582]: time="2025-09-04T04:19:39.967409710Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 04:19:40.247839 kubelet[2768]: E0904 04:19:40.247626 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:40.563168 systemd[1]: Created slice kubepods-besteffort-pod09476a89_0427_4ceb_b688_d0bc4d6ded00.slice - libcontainer container kubepods-besteffort-pod09476a89_0427_4ceb_b688_d0bc4d6ded00.slice. 
Sep 4 04:19:40.608829 kubelet[2768]: I0904 04:19:40.608745 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn29g\" (UniqueName: \"kubernetes.io/projected/09476a89-0427-4ceb-b688-d0bc4d6ded00-kube-api-access-tn29g\") pod \"kube-proxy-vfkfk\" (UID: \"09476a89-0427-4ceb-b688-d0bc4d6ded00\") " pod="kube-system/kube-proxy-vfkfk" Sep 4 04:19:40.608829 kubelet[2768]: I0904 04:19:40.608799 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjfk\" (UniqueName: \"kubernetes.io/projected/3b46ce19-e6a8-4632-9779-0ca7bf8d78a2-kube-api-access-8zjfk\") pod \"tigera-operator-58fc44c59b-kdwnl\" (UID: \"3b46ce19-e6a8-4632-9779-0ca7bf8d78a2\") " pod="tigera-operator/tigera-operator-58fc44c59b-kdwnl" Sep 4 04:19:40.608829 kubelet[2768]: I0904 04:19:40.608838 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09476a89-0427-4ceb-b688-d0bc4d6ded00-xtables-lock\") pod \"kube-proxy-vfkfk\" (UID: \"09476a89-0427-4ceb-b688-d0bc4d6ded00\") " pod="kube-system/kube-proxy-vfkfk" Sep 4 04:19:40.609108 kubelet[2768]: I0904 04:19:40.608869 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09476a89-0427-4ceb-b688-d0bc4d6ded00-lib-modules\") pod \"kube-proxy-vfkfk\" (UID: \"09476a89-0427-4ceb-b688-d0bc4d6ded00\") " pod="kube-system/kube-proxy-vfkfk" Sep 4 04:19:40.609108 kubelet[2768]: I0904 04:19:40.608889 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b46ce19-e6a8-4632-9779-0ca7bf8d78a2-var-lib-calico\") pod \"tigera-operator-58fc44c59b-kdwnl\" (UID: \"3b46ce19-e6a8-4632-9779-0ca7bf8d78a2\") " pod="tigera-operator/tigera-operator-58fc44c59b-kdwnl" 
Sep 4 04:19:40.609108 kubelet[2768]: I0904 04:19:40.608903 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09476a89-0427-4ceb-b688-d0bc4d6ded00-kube-proxy\") pod \"kube-proxy-vfkfk\" (UID: \"09476a89-0427-4ceb-b688-d0bc4d6ded00\") " pod="kube-system/kube-proxy-vfkfk" Sep 4 04:19:40.614894 systemd[1]: Created slice kubepods-besteffort-pod3b46ce19_e6a8_4632_9779_0ca7bf8d78a2.slice - libcontainer container kubepods-besteffort-pod3b46ce19_e6a8_4632_9779_0ca7bf8d78a2.slice. Sep 4 04:19:40.873504 kubelet[2768]: E0904 04:19:40.873355 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:40.874161 containerd[1582]: time="2025-09-04T04:19:40.874101876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfkfk,Uid:09476a89-0427-4ceb-b688-d0bc4d6ded00,Namespace:kube-system,Attempt:0,}" Sep 4 04:19:40.904827 containerd[1582]: time="2025-09-04T04:19:40.904751458Z" level=info msg="connecting to shim 30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d" address="unix:///run/containerd/s/81e41c69a658f980d0198205509f2b50fd69c7dddeace64d93fe756292d621f1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:40.918602 containerd[1582]: time="2025-09-04T04:19:40.918538408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kdwnl,Uid:3b46ce19-e6a8-4632-9779-0ca7bf8d78a2,Namespace:tigera-operator,Attempt:0,}" Sep 4 04:19:40.947711 containerd[1582]: time="2025-09-04T04:19:40.947647553Z" level=info msg="connecting to shim 84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd" address="unix:///run/containerd/s/9f53a67a560036f93fda62f569a1689e646ff6b3c34397ed630eb946c44147b3" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:40.948361 systemd[1]: Started 
cri-containerd-30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d.scope - libcontainer container 30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d. Sep 4 04:19:40.985639 containerd[1582]: time="2025-09-04T04:19:40.985515858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfkfk,Uid:09476a89-0427-4ceb-b688-d0bc4d6ded00,Namespace:kube-system,Attempt:0,} returns sandbox id \"30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d\"" Sep 4 04:19:40.986508 kubelet[2768]: E0904 04:19:40.986271 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:40.988963 containerd[1582]: time="2025-09-04T04:19:40.988932517Z" level=info msg="CreateContainer within sandbox \"30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 04:19:41.005515 containerd[1582]: time="2025-09-04T04:19:41.005452745Z" level=info msg="Container 7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:19:41.011448 systemd[1]: Started cri-containerd-84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd.scope - libcontainer container 84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd. 
Sep 4 04:19:41.022434 kubelet[2768]: E0904 04:19:41.022382 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:41.024547 containerd[1582]: time="2025-09-04T04:19:41.024494946Z" level=info msg="CreateContainer within sandbox \"30cfe512738ec8bea086cdacb7db04058ba4252e4cb984300702c46f02f3150d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed\""
Sep 4 04:19:41.024970 containerd[1582]: time="2025-09-04T04:19:41.024943814Z" level=info msg="StartContainer for \"7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed\""
Sep 4 04:19:41.031514 containerd[1582]: time="2025-09-04T04:19:41.031454504Z" level=info msg="connecting to shim 7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed" address="unix:///run/containerd/s/81e41c69a658f980d0198205509f2b50fd69c7dddeace64d93fe756292d621f1" protocol=ttrpc version=3
Sep 4 04:19:41.066684 systemd[1]: Started cri-containerd-7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed.scope - libcontainer container 7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed.
Sep 4 04:19:41.070796 containerd[1582]: time="2025-09-04T04:19:41.070751410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kdwnl,Uid:3b46ce19-e6a8-4632-9779-0ca7bf8d78a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd\""
Sep 4 04:19:41.073678 containerd[1582]: time="2025-09-04T04:19:41.073629423Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 4 04:19:41.287265 containerd[1582]: time="2025-09-04T04:19:41.287186904Z" level=info msg="StartContainer for \"7594e26da2850376891826d41467ec54be5faa57ebebf7d5804df9a1cacaebed\" returns successfully"
Sep 4 04:19:42.026088 kubelet[2768]: E0904 04:19:42.026038 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:42.603436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744618173.mount: Deactivated successfully.
Sep 4 04:19:43.018971 containerd[1582]: time="2025-09-04T04:19:43.018870214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:43.019730 containerd[1582]: time="2025-09-04T04:19:43.019680602Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 4 04:19:43.020896 containerd[1582]: time="2025-09-04T04:19:43.020839121Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:43.023813 containerd[1582]: time="2025-09-04T04:19:43.023746702Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:43.024301 containerd[1582]: time="2025-09-04T04:19:43.024261572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.950584336s"
Sep 4 04:19:43.024301 containerd[1582]: time="2025-09-04T04:19:43.024297640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 4 04:19:43.026821 containerd[1582]: time="2025-09-04T04:19:43.026788703Z" level=info msg="CreateContainer within sandbox \"84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 04:19:43.029638 kubelet[2768]: E0904 04:19:43.029598 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:43.043161 containerd[1582]: time="2025-09-04T04:19:43.042550889Z" level=info msg="Container 4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:19:43.050543 containerd[1582]: time="2025-09-04T04:19:43.050481644Z" level=info msg="CreateContainer within sandbox \"84e1060640466a3678ea59000bc3ba6df770cf765de9c050578ef646f2cb90bd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de\""
Sep 4 04:19:43.051185 containerd[1582]: time="2025-09-04T04:19:43.051147470Z" level=info msg="StartContainer for \"4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de\""
Sep 4 04:19:43.052160 containerd[1582]: time="2025-09-04T04:19:43.052111288Z" level=info msg="connecting to shim 4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de" address="unix:///run/containerd/s/9f53a67a560036f93fda62f569a1689e646ff6b3c34397ed630eb946c44147b3" protocol=ttrpc version=3
Sep 4 04:19:43.113301 systemd[1]: Started cri-containerd-4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de.scope - libcontainer container 4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de.
Sep 4 04:19:43.150920 containerd[1582]: time="2025-09-04T04:19:43.150863950Z" level=info msg="StartContainer for \"4eaea2b91b382caa92cecc98b45fded71ddcd803609d802b3f1cb4b0de4c24de\" returns successfully"
Sep 4 04:19:44.040869 kubelet[2768]: I0904 04:19:44.040757 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vfkfk" podStartSLOduration=4.040733428 podStartE2EDuration="4.040733428s" podCreationTimestamp="2025-09-04 04:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:42.036949825 +0000 UTC m=+6.425877752" watchObservedRunningTime="2025-09-04 04:19:44.040733428 +0000 UTC m=+8.429661355"
Sep 4 04:19:44.966400 kubelet[2768]: E0904 04:19:44.964048 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:45.002548 kubelet[2768]: I0904 04:19:45.002437 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-kdwnl" podStartSLOduration=3.049431487 podStartE2EDuration="5.002400798s" podCreationTimestamp="2025-09-04 04:19:40 +0000 UTC" firstStartedPulling="2025-09-04 04:19:41.072255449 +0000 UTC m=+5.461183386" lastFinishedPulling="2025-09-04 04:19:43.025224769 +0000 UTC m=+7.414152697" observedRunningTime="2025-09-04 04:19:44.040987363 +0000 UTC m=+8.429915290" watchObservedRunningTime="2025-09-04 04:19:45.002400798 +0000 UTC m=+9.391328725"
Sep 4 04:19:45.036973 kubelet[2768]: E0904 04:19:45.036909 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:47.924473 kubelet[2768]: E0904 04:19:47.924428 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:48.737187 sudo[1793]: pam_unix(sudo:session): session closed for user root
Sep 4 04:19:48.741719 sshd[1792]: Connection closed by 10.0.0.1 port 56508
Sep 4 04:19:48.743835 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Sep 4 04:19:48.756391 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:56508.service: Deactivated successfully.
Sep 4 04:19:48.764958 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 04:19:48.765974 systemd[1]: session-9.scope: Consumed 5.578s CPU time, 223.9M memory peak.
Sep 4 04:19:48.774709 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit.
Sep 4 04:19:48.777204 systemd-logind[1513]: Removed session 9.
Sep 4 04:19:51.505039 systemd[1]: Created slice kubepods-besteffort-pod8aa7c9cc_ad05_4b9b_92d7_62189f0130e0.slice - libcontainer container kubepods-besteffort-pod8aa7c9cc_ad05_4b9b_92d7_62189f0130e0.slice.
Sep 4 04:19:51.584668 kubelet[2768]: I0904 04:19:51.584561 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa7c9cc-ad05-4b9b-92d7-62189f0130e0-tigera-ca-bundle\") pod \"calico-typha-d5d87b9fc-ctzkx\" (UID: \"8aa7c9cc-ad05-4b9b-92d7-62189f0130e0\") " pod="calico-system/calico-typha-d5d87b9fc-ctzkx"
Sep 4 04:19:51.585636 kubelet[2768]: I0904 04:19:51.584863 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5r2\" (UniqueName: \"kubernetes.io/projected/8aa7c9cc-ad05-4b9b-92d7-62189f0130e0-kube-api-access-hr5r2\") pod \"calico-typha-d5d87b9fc-ctzkx\" (UID: \"8aa7c9cc-ad05-4b9b-92d7-62189f0130e0\") " pod="calico-system/calico-typha-d5d87b9fc-ctzkx"
Sep 4 04:19:51.585801 kubelet[2768]: I0904 04:19:51.585729 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8aa7c9cc-ad05-4b9b-92d7-62189f0130e0-typha-certs\") pod \"calico-typha-d5d87b9fc-ctzkx\" (UID: \"8aa7c9cc-ad05-4b9b-92d7-62189f0130e0\") " pod="calico-system/calico-typha-d5d87b9fc-ctzkx"
Sep 4 04:19:51.809663 kubelet[2768]: E0904 04:19:51.809495 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:19:51.810631 containerd[1582]: time="2025-09-04T04:19:51.810561722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5d87b9fc-ctzkx,Uid:8aa7c9cc-ad05-4b9b-92d7-62189f0130e0,Namespace:calico-system,Attempt:0,}"
Sep 4 04:19:51.903777 systemd[1]: Created slice kubepods-besteffort-podf1f4efaf_4b6d_49ec_8d7d_22bafb120deb.slice - libcontainer container kubepods-besteffort-podf1f4efaf_4b6d_49ec_8d7d_22bafb120deb.slice.
Sep 4 04:19:51.918713 containerd[1582]: time="2025-09-04T04:19:51.918653239Z" level=info msg="connecting to shim 9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7" address="unix:///run/containerd/s/736a7cdf904d9c27f357d7d184d14e73bfc91c71d9aba3be1876fc920b64539a" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:19:51.953687 systemd[1]: Started cri-containerd-9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7.scope - libcontainer container 9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7.
Sep 4 04:19:51.987953 kubelet[2768]: I0904 04:19:51.987890 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-flexvol-driver-host\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989010 kubelet[2768]: I0904 04:19:51.988301 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-lib-modules\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989010 kubelet[2768]: I0904 04:19:51.988340 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-policysync\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989010 kubelet[2768]: I0904 04:19:51.988358 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-var-run-calico\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989010 kubelet[2768]: I0904 04:19:51.988372 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-cni-log-dir\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989010 kubelet[2768]: I0904 04:19:51.988385 2768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-cni-net-dir\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989183 kubelet[2768]: I0904 04:19:51.988398 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-xtables-lock\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989183 kubelet[2768]: I0904 04:19:51.988411 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-tigera-ca-bundle\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989183 kubelet[2768]: I0904 04:19:51.988425 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn7k\" (UniqueName: \"kubernetes.io/projected/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-kube-api-access-lfn7k\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989183 kubelet[2768]: I0904 04:19:51.988439 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-cni-bin-dir\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989183 kubelet[2768]: I0904 04:19:51.988461 2768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-node-certs\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:51.989304 kubelet[2768]: I0904 04:19:51.988477 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1f4efaf-4b6d-49ec-8d7d-22bafb120deb-var-lib-calico\") pod \"calico-node-d4rr9\" (UID: \"f1f4efaf-4b6d-49ec-8d7d-22bafb120deb\") " pod="calico-system/calico-node-d4rr9" Sep 4 04:19:52.052931 containerd[1582]: time="2025-09-04T04:19:52.052870706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5d87b9fc-ctzkx,Uid:8aa7c9cc-ad05-4b9b-92d7-62189f0130e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7\"" Sep 4 04:19:52.053798 kubelet[2768]: E0904 04:19:52.053754 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:52.055349 containerd[1582]: time="2025-09-04T04:19:52.055314601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 04:19:52.097086 kubelet[2768]: E0904 04:19:52.096581 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.097086 kubelet[2768]: W0904 04:19:52.096619 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.097086 kubelet[2768]: E0904 04:19:52.096672 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.102523 kubelet[2768]: E0904 04:19:52.102491 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.102523 kubelet[2768]: W0904 04:19:52.102508 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.102729 kubelet[2768]: E0904 04:19:52.102525 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.175734 kubelet[2768]: E0904 04:19:52.175500 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094" Sep 4 04:19:52.193254 kubelet[2768]: E0904 04:19:52.193195 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.193254 kubelet[2768]: W0904 04:19:52.193225 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.193254 kubelet[2768]: E0904 04:19:52.193248 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.193686 kubelet[2768]: E0904 04:19:52.193559 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.193686 kubelet[2768]: W0904 04:19:52.193604 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.193686 kubelet[2768]: E0904 04:19:52.193617 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.194022 kubelet[2768]: E0904 04:19:52.193883 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.194022 kubelet[2768]: W0904 04:19:52.193899 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.194022 kubelet[2768]: E0904 04:19:52.193912 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.194992 kubelet[2768]: E0904 04:19:52.194971 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.194992 kubelet[2768]: W0904 04:19:52.194986 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.195112 kubelet[2768]: E0904 04:19:52.195008 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.195852 kubelet[2768]: E0904 04:19:52.195832 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.195852 kubelet[2768]: W0904 04:19:52.195848 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.195992 kubelet[2768]: E0904 04:19:52.195861 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.196254 kubelet[2768]: E0904 04:19:52.196234 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.196254 kubelet[2768]: W0904 04:19:52.196248 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.196254 kubelet[2768]: E0904 04:19:52.196258 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.196510 kubelet[2768]: E0904 04:19:52.196457 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.196510 kubelet[2768]: W0904 04:19:52.196472 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.196510 kubelet[2768]: E0904 04:19:52.196483 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.196990 kubelet[2768]: E0904 04:19:52.196968 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.196990 kubelet[2768]: W0904 04:19:52.196985 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.197073 kubelet[2768]: E0904 04:19:52.196997 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.197270 kubelet[2768]: E0904 04:19:52.197243 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.197270 kubelet[2768]: W0904 04:19:52.197265 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.197371 kubelet[2768]: E0904 04:19:52.197277 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.197594 kubelet[2768]: E0904 04:19:52.197574 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.197594 kubelet[2768]: W0904 04:19:52.197588 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.197667 kubelet[2768]: E0904 04:19:52.197600 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.197862 kubelet[2768]: E0904 04:19:52.197841 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.197862 kubelet[2768]: W0904 04:19:52.197856 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.197967 kubelet[2768]: E0904 04:19:52.197867 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.198184 kubelet[2768]: E0904 04:19:52.198144 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.198184 kubelet[2768]: W0904 04:19:52.198164 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.198184 kubelet[2768]: E0904 04:19:52.198175 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.198474 kubelet[2768]: E0904 04:19:52.198442 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.198474 kubelet[2768]: W0904 04:19:52.198466 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.198581 kubelet[2768]: E0904 04:19:52.198479 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.199403 kubelet[2768]: E0904 04:19:52.199244 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.199403 kubelet[2768]: W0904 04:19:52.199385 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.199403 kubelet[2768]: E0904 04:19:52.199394 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.199655 kubelet[2768]: E0904 04:19:52.199594 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.199655 kubelet[2768]: W0904 04:19:52.199612 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.199655 kubelet[2768]: E0904 04:19:52.199623 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.199839 kubelet[2768]: E0904 04:19:52.199820 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.199839 kubelet[2768]: W0904 04:19:52.199837 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.199918 kubelet[2768]: E0904 04:19:52.199849 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.200149 kubelet[2768]: E0904 04:19:52.200109 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.200222 kubelet[2768]: W0904 04:19:52.200161 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.200222 kubelet[2768]: E0904 04:19:52.200176 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200397 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.201189 kubelet[2768]: W0904 04:19:52.200413 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200423 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200620 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.201189 kubelet[2768]: W0904 04:19:52.200633 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200643 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200851 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:52.201189 kubelet[2768]: W0904 04:19:52.200861 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:52.201189 kubelet[2768]: E0904 04:19:52.200872 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:52.207622 containerd[1582]: time="2025-09-04T04:19:52.207546071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4rr9,Uid:f1f4efaf-4b6d-49ec-8d7d-22bafb120deb,Namespace:calico-system,Attempt:0,}" Sep 4 04:19:52.240491 containerd[1582]: time="2025-09-04T04:19:52.240437489Z" level=info msg="connecting to shim 99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82" address="unix:///run/containerd/s/dfef4edb9ae2355d128601f2400a9ec18a307337b329e57bf744da7640970606" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:19:52.278471 systemd[1]: Started cri-containerd-99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82.scope - libcontainer container 99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82. 
Sep 4 04:19:52.292430 kubelet[2768]: E0904 04:19:52.292383 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 04:19:52.292430 kubelet[2768]: W0904 04:19:52.292417 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 04:19:52.292631 kubelet[2768]: E0904 04:19:52.292450 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 04:19:52.292631 kubelet[2768]: I0904 04:19:52.292504 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4122301-ffe2-4d53-9323-f407a3657094-varrun\") pod \"csi-node-driver-vxmmz\" (UID: \"c4122301-ffe2-4d53-9323-f407a3657094\") " pod="calico-system/csi-node-driver-vxmmz"
Sep 4 04:19:52.293205 kubelet[2768]: I0904 04:19:52.292937 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhfl\" (UniqueName: \"kubernetes.io/projected/c4122301-ffe2-4d53-9323-f407a3657094-kube-api-access-9bhfl\") pod \"csi-node-driver-vxmmz\" (UID: \"c4122301-ffe2-4d53-9323-f407a3657094\") " pod="calico-system/csi-node-driver-vxmmz"
Sep 4 04:19:52.294306 kubelet[2768]: I0904 04:19:52.294274 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4122301-ffe2-4d53-9323-f407a3657094-kubelet-dir\") pod \"csi-node-driver-vxmmz\" (UID: \"c4122301-ffe2-4d53-9323-f407a3657094\") " pod="calico-system/csi-node-driver-vxmmz"
Sep 4 04:19:52.295209 kubelet[2768]: I0904 04:19:52.295185 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4122301-ffe2-4d53-9323-f407a3657094-registration-dir\") pod \"csi-node-driver-vxmmz\" (UID: \"c4122301-ffe2-4d53-9323-f407a3657094\") " pod="calico-system/csi-node-driver-vxmmz"
Sep 4 04:19:52.296578 kubelet[2768]: I0904 04:19:52.296456 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4122301-ffe2-4d53-9323-f407a3657094-socket-dir\") pod \"csi-node-driver-vxmmz\" (UID: \"c4122301-ffe2-4d53-9323-f407a3657094\") " pod="calico-system/csi-node-driver-vxmmz"
Sep 4 04:19:52.319029 containerd[1582]: time="2025-09-04T04:19:52.318965182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4rr9,Uid:f1f4efaf-4b6d-49ec-8d7d-22bafb120deb,Namespace:calico-system,Attempt:0,} returns sandbox id \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\""
Sep 4 04:19:52.398304 kubelet[2768]: E0904 04:19:52.398161 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 04:19:52.398304 kubelet[2768]: W0904 04:19:52.398185 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 04:19:52.398304 kubelet[2768]: E0904 04:19:52.398207 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 04:19:53.612584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480085386.mount: Deactivated successfully.
Sep 4 04:19:53.993534 kubelet[2768]: E0904 04:19:53.993173 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094"
Sep 4 04:19:54.998549 containerd[1582]: time="2025-09-04T04:19:54.998467138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:54.999364 containerd[1582]: time="2025-09-04T04:19:54.999333785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 4 04:19:55.000518 containerd[1582]: time="2025-09-04T04:19:55.000490681Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:55.004080 containerd[1582]: time="2025-09-04T04:19:55.003993015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:19:55.004416 containerd[1582]: time="2025-09-04T04:19:55.004228382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.948873969s"
Sep 4 04:19:55.004416 containerd[1582]: time="2025-09-04T04:19:55.004273014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 4 04:19:55.006105 containerd[1582]: time="2025-09-04T04:19:55.005974644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 04:19:55.021553 containerd[1582]: time="2025-09-04T04:19:55.021489922Z" level=info msg="CreateContainer within sandbox \"9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 04:19:55.041332 containerd[1582]: time="2025-09-04T04:19:55.040387246Z" level=info msg="Container 870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:19:55.049236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681088904.mount: Deactivated successfully.
Sep 4 04:19:55.062741 containerd[1582]: time="2025-09-04T04:19:55.062675462Z" level=info msg="CreateContainer within sandbox \"9d835f94bf619b7b3123de3df9bac32e7252398e7c03da59b55b35cf7d40e9d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7\"" Sep 4 04:19:55.063584 containerd[1582]: time="2025-09-04T04:19:55.063098357Z" level=info msg="StartContainer for \"870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7\"" Sep 4 04:19:55.064753 containerd[1582]: time="2025-09-04T04:19:55.064709449Z" level=info msg="connecting to shim 870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7" address="unix:///run/containerd/s/736a7cdf904d9c27f357d7d184d14e73bfc91c71d9aba3be1876fc920b64539a" protocol=ttrpc version=3 Sep 4 04:19:55.094411 systemd[1]: Started cri-containerd-870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7.scope - libcontainer container 870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7. 
Sep 4 04:19:55.161578 containerd[1582]: time="2025-09-04T04:19:55.161449947Z" level=info msg="StartContainer for \"870fab60aa65b7f1537d29cdc885627688c8eda818b210daa1dd87e3d78063b7\" returns successfully" Sep 4 04:19:55.993593 kubelet[2768]: E0904 04:19:55.993504 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094" Sep 4 04:19:56.071278 kubelet[2768]: E0904 04:19:56.071191 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:56.089288 kubelet[2768]: I0904 04:19:56.089172 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d5d87b9fc-ctzkx" podStartSLOduration=2.137985644 podStartE2EDuration="5.089150905s" podCreationTimestamp="2025-09-04 04:19:51 +0000 UTC" firstStartedPulling="2025-09-04 04:19:52.054461823 +0000 UTC m=+16.443389750" lastFinishedPulling="2025-09-04 04:19:55.005627084 +0000 UTC m=+19.394555011" observedRunningTime="2025-09-04 04:19:56.088958909 +0000 UTC m=+20.477886826" watchObservedRunningTime="2025-09-04 04:19:56.089150905 +0000 UTC m=+20.478078832" Sep 4 04:19:56.129644 kubelet[2768]: E0904 04:19:56.129598 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.129644 kubelet[2768]: W0904 04:19:56.129626 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.129644 kubelet[2768]: E0904 04:19:56.129648 2768 plugins.go:691] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.129892 kubelet[2768]: E0904 04:19:56.129869 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.129892 kubelet[2768]: W0904 04:19:56.129883 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.129892 kubelet[2768]: E0904 04:19:56.129892 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.130096 kubelet[2768]: E0904 04:19:56.130078 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.130096 kubelet[2768]: W0904 04:19:56.130088 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.130096 kubelet[2768]: E0904 04:19:56.130096 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.130306 kubelet[2768]: E0904 04:19:56.130287 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.130306 kubelet[2768]: W0904 04:19:56.130297 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.130306 kubelet[2768]: E0904 04:19:56.130305 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.130486 kubelet[2768]: E0904 04:19:56.130475 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.130486 kubelet[2768]: W0904 04:19:56.130483 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.130533 kubelet[2768]: E0904 04:19:56.130491 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.130689 kubelet[2768]: E0904 04:19:56.130677 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.130689 kubelet[2768]: W0904 04:19:56.130686 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.130742 kubelet[2768]: E0904 04:19:56.130693 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.130851 kubelet[2768]: E0904 04:19:56.130839 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.130851 kubelet[2768]: W0904 04:19:56.130848 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.130901 kubelet[2768]: E0904 04:19:56.130857 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.131031 kubelet[2768]: E0904 04:19:56.131020 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131031 kubelet[2768]: W0904 04:19:56.131029 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.131089 kubelet[2768]: E0904 04:19:56.131036 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.131254 kubelet[2768]: E0904 04:19:56.131232 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131254 kubelet[2768]: W0904 04:19:56.131253 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.131311 kubelet[2768]: E0904 04:19:56.131261 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.131423 kubelet[2768]: E0904 04:19:56.131412 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131423 kubelet[2768]: W0904 04:19:56.131420 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.131473 kubelet[2768]: E0904 04:19:56.131427 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.131600 kubelet[2768]: E0904 04:19:56.131589 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131600 kubelet[2768]: W0904 04:19:56.131597 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.131648 kubelet[2768]: E0904 04:19:56.131605 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.131777 kubelet[2768]: E0904 04:19:56.131765 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131777 kubelet[2768]: W0904 04:19:56.131774 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.131842 kubelet[2768]: E0904 04:19:56.131781 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.131970 kubelet[2768]: E0904 04:19:56.131958 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.131970 kubelet[2768]: W0904 04:19:56.131967 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.132018 kubelet[2768]: E0904 04:19:56.131974 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.132146 kubelet[2768]: E0904 04:19:56.132118 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.132146 kubelet[2768]: W0904 04:19:56.132144 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.132200 kubelet[2768]: E0904 04:19:56.132155 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.132341 kubelet[2768]: E0904 04:19:56.132330 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.132341 kubelet[2768]: W0904 04:19:56.132339 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.132411 kubelet[2768]: E0904 04:19:56.132347 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.227903 kubelet[2768]: E0904 04:19:56.227847 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.227903 kubelet[2768]: W0904 04:19:56.227882 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.227903 kubelet[2768]: E0904 04:19:56.227909 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.228383 kubelet[2768]: E0904 04:19:56.228328 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.228383 kubelet[2768]: W0904 04:19:56.228371 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.228463 kubelet[2768]: E0904 04:19:56.228416 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.228662 kubelet[2768]: E0904 04:19:56.228630 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.228662 kubelet[2768]: W0904 04:19:56.228644 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.228662 kubelet[2768]: E0904 04:19:56.228658 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.228945 kubelet[2768]: E0904 04:19:56.228906 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.228945 kubelet[2768]: W0904 04:19:56.228925 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.229026 kubelet[2768]: E0904 04:19:56.228947 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.229304 kubelet[2768]: E0904 04:19:56.229279 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.229304 kubelet[2768]: W0904 04:19:56.229297 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.229406 kubelet[2768]: E0904 04:19:56.229318 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.229594 kubelet[2768]: E0904 04:19:56.229572 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.229594 kubelet[2768]: W0904 04:19:56.229587 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.229690 kubelet[2768]: E0904 04:19:56.229604 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.229837 kubelet[2768]: E0904 04:19:56.229816 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.229837 kubelet[2768]: W0904 04:19:56.229831 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.229906 kubelet[2768]: E0904 04:19:56.229866 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.230068 kubelet[2768]: E0904 04:19:56.230046 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.230068 kubelet[2768]: W0904 04:19:56.230061 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.230159 kubelet[2768]: E0904 04:19:56.230104 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.230329 kubelet[2768]: E0904 04:19:56.230307 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.230329 kubelet[2768]: W0904 04:19:56.230321 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.230400 kubelet[2768]: E0904 04:19:56.230353 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.230585 kubelet[2768]: E0904 04:19:56.230565 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.230585 kubelet[2768]: W0904 04:19:56.230578 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.230663 kubelet[2768]: E0904 04:19:56.230597 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.230896 kubelet[2768]: E0904 04:19:56.230868 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.230896 kubelet[2768]: W0904 04:19:56.230882 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.230970 kubelet[2768]: E0904 04:19:56.230903 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.231170 kubelet[2768]: E0904 04:19:56.231155 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.231170 kubelet[2768]: W0904 04:19:56.231167 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.231244 kubelet[2768]: E0904 04:19:56.231183 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.231519 kubelet[2768]: E0904 04:19:56.231494 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.231519 kubelet[2768]: W0904 04:19:56.231513 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.231597 kubelet[2768]: E0904 04:19:56.231535 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.231777 kubelet[2768]: E0904 04:19:56.231756 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.231777 kubelet[2768]: W0904 04:19:56.231770 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.231843 kubelet[2768]: E0904 04:19:56.231782 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.232046 kubelet[2768]: E0904 04:19:56.232024 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.232046 kubelet[2768]: W0904 04:19:56.232039 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.232168 kubelet[2768]: E0904 04:19:56.232103 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.232304 kubelet[2768]: E0904 04:19:56.232275 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.232304 kubelet[2768]: W0904 04:19:56.232299 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.232384 kubelet[2768]: E0904 04:19:56.232311 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.232688 kubelet[2768]: E0904 04:19:56.232665 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.232688 kubelet[2768]: W0904 04:19:56.232680 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.232780 kubelet[2768]: E0904 04:19:56.232693 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:19:56.233120 kubelet[2768]: E0904 04:19:56.233096 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:19:56.233120 kubelet[2768]: W0904 04:19:56.233109 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:19:56.233196 kubelet[2768]: E0904 04:19:56.233120 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:19:56.535863 containerd[1582]: time="2025-09-04T04:19:56.535787003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:19:56.538225 containerd[1582]: time="2025-09-04T04:19:56.538192246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 04:19:56.538680 containerd[1582]: time="2025-09-04T04:19:56.538650202Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:19:56.541951 containerd[1582]: time="2025-09-04T04:19:56.541907182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:19:56.542977 containerd[1582]: time="2025-09-04T04:19:56.542786837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.536777773s" Sep 4 04:19:56.542977 containerd[1582]: time="2025-09-04T04:19:56.542840338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 04:19:56.545717 containerd[1582]: time="2025-09-04T04:19:56.545672692Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 04:19:56.557411 containerd[1582]: time="2025-09-04T04:19:56.557346360Z" level=info msg="Container 7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:19:56.566879 containerd[1582]: time="2025-09-04T04:19:56.566821271Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\"" Sep 4 04:19:56.567509 containerd[1582]: time="2025-09-04T04:19:56.567476794Z" level=info msg="StartContainer for \"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\"" Sep 4 04:19:56.569450 containerd[1582]: time="2025-09-04T04:19:56.569412328Z" level=info msg="connecting to shim 7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc" address="unix:///run/containerd/s/dfef4edb9ae2355d128601f2400a9ec18a307337b329e57bf744da7640970606" protocol=ttrpc version=3 Sep 4 04:19:56.597991 systemd[1]: Started cri-containerd-7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc.scope - libcontainer container 7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc. Sep 4 04:19:56.650886 containerd[1582]: time="2025-09-04T04:19:56.650760087Z" level=info msg="StartContainer for \"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\" returns successfully" Sep 4 04:19:56.666219 systemd[1]: cri-containerd-7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc.scope: Deactivated successfully. 
Sep 4 04:19:56.668992 containerd[1582]: time="2025-09-04T04:19:56.668937976Z" level=info msg="received exit event container_id:\"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\" id:\"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\" pid:3459 exited_at:{seconds:1756959596 nanos:668454899}" Sep 4 04:19:56.669202 containerd[1582]: time="2025-09-04T04:19:56.669155525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\" id:\"7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc\" pid:3459 exited_at:{seconds:1756959596 nanos:668454899}" Sep 4 04:19:56.693403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a1f78b73ff5a3477f55d5d6d44cf717a2ebc4b3bfe95a900e57e8e08aee2bbc-rootfs.mount: Deactivated successfully. Sep 4 04:19:57.075067 kubelet[2768]: E0904 04:19:57.075025 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:57.075781 containerd[1582]: time="2025-09-04T04:19:57.075723833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 04:19:57.993502 kubelet[2768]: E0904 04:19:57.993437 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094" Sep 4 04:19:58.076496 kubelet[2768]: E0904 04:19:58.076447 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:19:59.992774 kubelet[2768]: E0904 04:19:59.992724 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094"
Sep 4 04:20:01.992974 kubelet[2768]: E0904 04:20:01.992900 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094"
Sep 4 04:20:02.274256 containerd[1582]: time="2025-09-04T04:20:02.274049750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:02.275440 containerd[1582]: time="2025-09-04T04:20:02.275392921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 4 04:20:02.276609 containerd[1582]: time="2025-09-04T04:20:02.276577419Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:02.279274 containerd[1582]: time="2025-09-04T04:20:02.279247438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:02.279848 containerd[1582]: time="2025-09-04T04:20:02.279803302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.204029827s"
Sep 4 04:20:02.279848 containerd[1582]: time="2025-09-04T04:20:02.279832271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 4 04:20:02.281881 containerd[1582]: time="2025-09-04T04:20:02.281831411Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 04:20:02.295010 containerd[1582]: time="2025-09-04T04:20:02.294939459Z" level=info msg="Container 28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:20:02.313116 containerd[1582]: time="2025-09-04T04:20:02.313040236Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\""
Sep 4 04:20:02.313595 containerd[1582]: time="2025-09-04T04:20:02.313556671Z" level=info msg="StartContainer for \"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\""
Sep 4 04:20:02.315371 containerd[1582]: time="2025-09-04T04:20:02.315330251Z" level=info msg="connecting to shim 28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9" address="unix:///run/containerd/s/dfef4edb9ae2355d128601f2400a9ec18a307337b329e57bf744da7640970606" protocol=ttrpc version=3
Sep 4 04:20:02.342660 systemd[1]: Started cri-containerd-28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9.scope - libcontainer container 28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9.
Sep 4 04:20:02.392847 containerd[1582]: time="2025-09-04T04:20:02.392800180Z" level=info msg="StartContainer for \"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\" returns successfully"
Sep 4 04:20:03.992747 kubelet[2768]: E0904 04:20:03.992662 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094"
Sep 4 04:20:04.053876 systemd[1]: cri-containerd-28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9.scope: Deactivated successfully.
Sep 4 04:20:04.054287 systemd[1]: cri-containerd-28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9.scope: Consumed 730ms CPU time, 179.8M memory peak, 3.1M read from disk, 171.3M written to disk.
Sep 4 04:20:04.055951 containerd[1582]: time="2025-09-04T04:20:04.055892106Z" level=info msg="received exit event container_id:\"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\" id:\"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\" pid:3517 exited_at:{seconds:1756959604 nanos:55650625}"
Sep 4 04:20:04.056336 containerd[1582]: time="2025-09-04T04:20:04.056068466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\" id:\"28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9\" pid:3517 exited_at:{seconds:1756959604 nanos:55650625}"
Sep 4 04:20:04.078470 kubelet[2768]: I0904 04:20:04.078417 2768 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 4 04:20:04.079031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28387390e3e8275570c0729246a4747d104a4c267037e499286343ec0bd377b9-rootfs.mount: Deactivated successfully.
Sep 4 04:20:04.115372 systemd[1]: Created slice kubepods-burstable-pod358ec66d_ac3e_40ac_8894_fbc7c8591e68.slice - libcontainer container kubepods-burstable-pod358ec66d_ac3e_40ac_8894_fbc7c8591e68.slice.
Sep 4 04:20:04.132734 systemd[1]: Created slice kubepods-besteffort-podfef499e2_73e2_47bb_b0c6_be0bc00abfc8.slice - libcontainer container kubepods-besteffort-podfef499e2_73e2_47bb_b0c6_be0bc00abfc8.slice.
Sep 4 04:20:04.139606 systemd[1]: Created slice kubepods-burstable-pod19dae45b_0a4c_4ef8_8765_597e5b94e56b.slice - libcontainer container kubepods-burstable-pod19dae45b_0a4c_4ef8_8765_597e5b94e56b.slice.
Sep 4 04:20:04.147162 systemd[1]: Created slice kubepods-besteffort-pod555f180c_2b38_4604_ad12_b5ea4891e907.slice - libcontainer container kubepods-besteffort-pod555f180c_2b38_4604_ad12_b5ea4891e907.slice.
Sep 4 04:20:04.152385 systemd[1]: Created slice kubepods-besteffort-pod640dc2d5_cd5d_434f_8f0d_3d5aec52bc0b.slice - libcontainer container kubepods-besteffort-pod640dc2d5_cd5d_434f_8f0d_3d5aec52bc0b.slice.
Sep 4 04:20:04.160188 systemd[1]: Created slice kubepods-besteffort-pod9be9fbca_716f_4567_931e_2b5c2be64b35.slice - libcontainer container kubepods-besteffort-pod9be9fbca_716f_4567_931e_2b5c2be64b35.slice.
Sep 4 04:20:04.167670 systemd[1]: Created slice kubepods-besteffort-poda5774771_f398_4b32_a748_6ad0d5e830e7.slice - libcontainer container kubepods-besteffort-poda5774771_f398_4b32_a748_6ad0d5e830e7.slice.
Sep 4 04:20:04.285682 kubelet[2768]: I0904 04:20:04.285393 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358ec66d-ac3e-40ac-8894-fbc7c8591e68-config-volume\") pod \"coredns-7c65d6cfc9-vps4h\" (UID: \"358ec66d-ac3e-40ac-8894-fbc7c8591e68\") " pod="kube-system/coredns-7c65d6cfc9-vps4h"
Sep 4 04:20:04.285682 kubelet[2768]: I0904 04:20:04.285478 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5774771-f398-4b32-a748-6ad0d5e830e7-goldmane-ca-bundle\") pod \"goldmane-7988f88666-tcfnr\" (UID: \"a5774771-f398-4b32-a748-6ad0d5e830e7\") " pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.285682 kubelet[2768]: I0904 04:20:04.285510 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef499e2-73e2-47bb-b0c6-be0bc00abfc8-tigera-ca-bundle\") pod \"calico-kube-controllers-7db647d8c7-gc5kl\" (UID: \"fef499e2-73e2-47bb-b0c6-be0bc00abfc8\") " pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl"
Sep 4 04:20:04.285682 kubelet[2768]: I0904 04:20:04.285597 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6tz\" (UniqueName: \"kubernetes.io/projected/19dae45b-0a4c-4ef8-8765-597e5b94e56b-kube-api-access-5f6tz\") pod \"coredns-7c65d6cfc9-4fh5g\" (UID: \"19dae45b-0a4c-4ef8-8765-597e5b94e56b\") " pod="kube-system/coredns-7c65d6cfc9-4fh5g"
Sep 4 04:20:04.285682 kubelet[2768]: I0904 04:20:04.285662 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a5774771-f398-4b32-a748-6ad0d5e830e7-goldmane-key-pair\") pod \"goldmane-7988f88666-tcfnr\" (UID: \"a5774771-f398-4b32-a748-6ad0d5e830e7\") " pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.285990 kubelet[2768]: I0904 04:20:04.285683 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b-calico-apiserver-certs\") pod \"calico-apiserver-84574cdc5d-lvm6s\" (UID: \"640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b\") " pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s"
Sep 4 04:20:04.285990 kubelet[2768]: I0904 04:20:04.285725 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksft\" (UniqueName: \"kubernetes.io/projected/358ec66d-ac3e-40ac-8894-fbc7c8591e68-kube-api-access-dksft\") pod \"coredns-7c65d6cfc9-vps4h\" (UID: \"358ec66d-ac3e-40ac-8894-fbc7c8591e68\") " pod="kube-system/coredns-7c65d6cfc9-vps4h"
Sep 4 04:20:04.285990 kubelet[2768]: I0904 04:20:04.285742 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hz2\" (UniqueName: \"kubernetes.io/projected/a5774771-f398-4b32-a748-6ad0d5e830e7-kube-api-access-j2hz2\") pod \"goldmane-7988f88666-tcfnr\" (UID: \"a5774771-f398-4b32-a748-6ad0d5e830e7\") " pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.285990 kubelet[2768]: I0904 04:20:04.285758 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx96h\" (UniqueName: \"kubernetes.io/projected/9be9fbca-716f-4567-931e-2b5c2be64b35-kube-api-access-mx96h\") pod \"whisker-577d6b859d-996kf\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " pod="calico-system/whisker-577d6b859d-996kf"
Sep 4 04:20:04.285990 kubelet[2768]: I0904 04:20:04.285775 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8tn\" (UniqueName: \"kubernetes.io/projected/555f180c-2b38-4604-ad12-b5ea4891e907-kube-api-access-cd8tn\") pod \"calico-apiserver-84574cdc5d-fdvcg\" (UID: \"555f180c-2b38-4604-ad12-b5ea4891e907\") " pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg"
Sep 4 04:20:04.286174 kubelet[2768]: I0904 04:20:04.285806 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19dae45b-0a4c-4ef8-8765-597e5b94e56b-config-volume\") pod \"coredns-7c65d6cfc9-4fh5g\" (UID: \"19dae45b-0a4c-4ef8-8765-597e5b94e56b\") " pod="kube-system/coredns-7c65d6cfc9-4fh5g"
Sep 4 04:20:04.286174 kubelet[2768]: I0904 04:20:04.285824 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swrm9\" (UniqueName: \"kubernetes.io/projected/fef499e2-73e2-47bb-b0c6-be0bc00abfc8-kube-api-access-swrm9\") pod \"calico-kube-controllers-7db647d8c7-gc5kl\" (UID: \"fef499e2-73e2-47bb-b0c6-be0bc00abfc8\") " pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl"
Sep 4 04:20:04.286174 kubelet[2768]: I0904 04:20:04.285843 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-backend-key-pair\") pod \"whisker-577d6b859d-996kf\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " pod="calico-system/whisker-577d6b859d-996kf"
Sep 4 04:20:04.286174 kubelet[2768]: I0904 04:20:04.285858 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-ca-bundle\") pod \"whisker-577d6b859d-996kf\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " pod="calico-system/whisker-577d6b859d-996kf"
Sep 4 04:20:04.286174 kubelet[2768]: I0904 04:20:04.285872 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/555f180c-2b38-4604-ad12-b5ea4891e907-calico-apiserver-certs\") pod \"calico-apiserver-84574cdc5d-fdvcg\" (UID: \"555f180c-2b38-4604-ad12-b5ea4891e907\") " pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg"
Sep 4 04:20:04.286312 kubelet[2768]: I0904 04:20:04.285885 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq7q8\" (UniqueName: \"kubernetes.io/projected/640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b-kube-api-access-kq7q8\") pod \"calico-apiserver-84574cdc5d-lvm6s\" (UID: \"640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b\") " pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s"
Sep 4 04:20:04.286312 kubelet[2768]: I0904 04:20:04.285904 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5774771-f398-4b32-a748-6ad0d5e830e7-config\") pod \"goldmane-7988f88666-tcfnr\" (UID: \"a5774771-f398-4b32-a748-6ad0d5e830e7\") " pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.425867 kubelet[2768]: E0904 04:20:04.425799 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:20:04.426465 containerd[1582]: time="2025-09-04T04:20:04.426421214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vps4h,Uid:358ec66d-ac3e-40ac-8894-fbc7c8591e68,Namespace:kube-system,Attempt:0,}"
Sep 4 04:20:04.436570 containerd[1582]: time="2025-09-04T04:20:04.436513861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db647d8c7-gc5kl,Uid:fef499e2-73e2-47bb-b0c6-be0bc00abfc8,Namespace:calico-system,Attempt:0,}"
Sep 4 04:20:04.444062 kubelet[2768]: E0904 04:20:04.443987 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:20:04.444890 containerd[1582]: time="2025-09-04T04:20:04.444728527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fh5g,Uid:19dae45b-0a4c-4ef8-8765-597e5b94e56b,Namespace:kube-system,Attempt:0,}"
Sep 4 04:20:04.451505 containerd[1582]: time="2025-09-04T04:20:04.451446876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-fdvcg,Uid:555f180c-2b38-4604-ad12-b5ea4891e907,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 04:20:04.457401 containerd[1582]: time="2025-09-04T04:20:04.457369378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-lvm6s,Uid:640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 04:20:04.464354 containerd[1582]: time="2025-09-04T04:20:04.464266553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577d6b859d-996kf,Uid:9be9fbca-716f-4567-931e-2b5c2be64b35,Namespace:calico-system,Attempt:0,}"
Sep 4 04:20:04.471462 containerd[1582]: time="2025-09-04T04:20:04.471383373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-tcfnr,Uid:a5774771-f398-4b32-a748-6ad0d5e830e7,Namespace:calico-system,Attempt:0,}"
Sep 4 04:20:04.622594 containerd[1582]: time="2025-09-04T04:20:04.622446770Z" level=error msg="Failed to destroy network for sandbox \"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.632563 containerd[1582]: time="2025-09-04T04:20:04.632488974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-tcfnr,Uid:a5774771-f398-4b32-a748-6ad0d5e830e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.634456 containerd[1582]: time="2025-09-04T04:20:04.633375819Z" level=error msg="Failed to destroy network for sandbox \"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.634456 containerd[1582]: time="2025-09-04T04:20:04.634310610Z" level=error msg="Failed to destroy network for sandbox \"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.635347 kubelet[2768]: E0904 04:20:04.635281 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.635570 kubelet[2768]: E0904 04:20:04.635550 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.635877 kubelet[2768]: E0904 04:20:04.635719 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-tcfnr"
Sep 4 04:20:04.635877 kubelet[2768]: E0904 04:20:04.635829 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-tcfnr_calico-system(a5774771-f398-4b32-a748-6ad0d5e830e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-tcfnr_calico-system(a5774771-f398-4b32-a748-6ad0d5e830e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"338fbc64a5c0fb8c7e9b133ed9bd08f6635254b2b93b31d7df9571d4e5bcb937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-tcfnr" podUID="a5774771-f398-4b32-a748-6ad0d5e830e7"
Sep 4 04:20:04.637724 containerd[1582]: time="2025-09-04T04:20:04.637444517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db647d8c7-gc5kl,Uid:fef499e2-73e2-47bb-b0c6-be0bc00abfc8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.639021 containerd[1582]: time="2025-09-04T04:20:04.638920731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fh5g,Uid:19dae45b-0a4c-4ef8-8765-597e5b94e56b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.639314 kubelet[2768]: E0904 04:20:04.639180 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.639314 kubelet[2768]: E0904 04:20:04.639185 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.639314 kubelet[2768]: E0904 04:20:04.639221 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4fh5g"
Sep 4 04:20:04.639314 kubelet[2768]: E0904 04:20:04.639239 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4fh5g"
Sep 4 04:20:04.639453 kubelet[2768]: E0904 04:20:04.639246 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl"
Sep 4 04:20:04.639453 kubelet[2768]: E0904 04:20:04.639275 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl"
Sep 4 04:20:04.639453 kubelet[2768]: E0904 04:20:04.639278 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4fh5g_kube-system(19dae45b-0a4c-4ef8-8765-597e5b94e56b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4fh5g_kube-system(19dae45b-0a4c-4ef8-8765-597e5b94e56b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c669b044a02d2c110b9623c0918085173f57379f9655f1a1c72941e73ee70dbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4fh5g" podUID="19dae45b-0a4c-4ef8-8765-597e5b94e56b"
Sep 4 04:20:04.639566 kubelet[2768]: E0904 04:20:04.639326 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db647d8c7-gc5kl_calico-system(fef499e2-73e2-47bb-b0c6-be0bc00abfc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db647d8c7-gc5kl_calico-system(fef499e2-73e2-47bb-b0c6-be0bc00abfc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9007e8ffcc796ead023a1b5a3f14f07e5f93ead2495607af940b2ffe9845fb77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl" podUID="fef499e2-73e2-47bb-b0c6-be0bc00abfc8"
Sep 4 04:20:04.640936 containerd[1582]: time="2025-09-04T04:20:04.640881190Z" level=error msg="Failed to destroy network for sandbox \"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.642890 containerd[1582]: time="2025-09-04T04:20:04.642782509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-fdvcg,Uid:555f180c-2b38-4604-ad12-b5ea4891e907,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.643189 kubelet[2768]: E0904 04:20:04.643138 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.643189 kubelet[2768]: E0904 04:20:04.643188 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg"
Sep 4 04:20:04.643404 kubelet[2768]: E0904 04:20:04.643214 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg"
Sep 4 04:20:04.643404 kubelet[2768]: E0904 04:20:04.643282 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84574cdc5d-fdvcg_calico-apiserver(555f180c-2b38-4604-ad12-b5ea4891e907)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84574cdc5d-fdvcg_calico-apiserver(555f180c-2b38-4604-ad12-b5ea4891e907)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fad9731cd85852aa13e9af329041764345a99580d765965425ca23f1408504ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg" podUID="555f180c-2b38-4604-ad12-b5ea4891e907"
Sep 4 04:20:04.644763 containerd[1582]: time="2025-09-04T04:20:04.644708558Z" level=error msg="Failed to destroy network for sandbox \"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.645166 containerd[1582]: time="2025-09-04T04:20:04.645058711Z" level=error msg="Failed to destroy network for sandbox \"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.646557 containerd[1582]: time="2025-09-04T04:20:04.646527921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vps4h,Uid:358ec66d-ac3e-40ac-8894-fbc7c8591e68,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.646983 kubelet[2768]: E0904 04:20:04.646947 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.647205 kubelet[2768]: E0904 04:20:04.646993 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vps4h"
Sep 4 04:20:04.647205 kubelet[2768]: E0904 04:20:04.647012 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vps4h"
Sep 4 04:20:04.647205 kubelet[2768]: E0904 04:20:04.647063 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vps4h_kube-system(358ec66d-ac3e-40ac-8894-fbc7c8591e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vps4h_kube-system(358ec66d-ac3e-40ac-8894-fbc7c8591e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a74046b6ebe169a22c67d27b2954be33fd1a6fe2ce2184a8a8861ee941207dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vps4h" podUID="358ec66d-ac3e-40ac-8894-fbc7c8591e68"
Sep 4 04:20:04.647364 containerd[1582]: time="2025-09-04T04:20:04.647295812Z" level=error msg="Failed to destroy network for sandbox \"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.647710 containerd[1582]: time="2025-09-04T04:20:04.647675886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577d6b859d-996kf,Uid:9be9fbca-716f-4567-931e-2b5c2be64b35,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.648094 kubelet[2768]: E0904 04:20:04.648055 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.648094 kubelet[2768]: E0904 04:20:04.648086 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577d6b859d-996kf"
Sep 4 04:20:04.648266 kubelet[2768]: E0904 04:20:04.648100 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577d6b859d-996kf"
Sep 4 04:20:04.648266 kubelet[2768]: E0904 04:20:04.648213 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-577d6b859d-996kf_calico-system(9be9fbca-716f-4567-931e-2b5c2be64b35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-577d6b859d-996kf_calico-system(9be9fbca-716f-4567-931e-2b5c2be64b35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"257a6a6dd4dc1f6e9c6279bfd6c4a5c84e5954823b7614d00a64d2e2ecb84f2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-577d6b859d-996kf" podUID="9be9fbca-716f-4567-931e-2b5c2be64b35"
Sep 4 04:20:04.648773 containerd[1582]: time="2025-09-04T04:20:04.648735021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-lvm6s,Uid:640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.648954 kubelet[2768]: E0904 04:20:04.648923 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 04:20:04.649022 kubelet[2768]: E0904 04:20:04.648963 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s"
Sep 4 04:20:04.649022 kubelet[2768]: E0904 04:20:04.648979 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s"
Sep 4 04:20:04.649103 kubelet[2768]: E0904 04:20:04.649013 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84574cdc5d-lvm6s_calico-apiserver(640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84574cdc5d-lvm6s_calico-apiserver(640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2fa3ff63cd72a3cdebe5a98296d393e33660ed47a5212e26263a2ba49226ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s" podUID="640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b" Sep 4 04:20:05.099379 containerd[1582]: time="2025-09-04T04:20:05.099313083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 04:20:06.001609 systemd[1]: Created slice kubepods-besteffort-podc4122301_ffe2_4d53_9323_f407a3657094.slice - libcontainer container kubepods-besteffort-podc4122301_ffe2_4d53_9323_f407a3657094.slice. Sep 4 04:20:06.004966 containerd[1582]: time="2025-09-04T04:20:06.004918823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxmmz,Uid:c4122301-ffe2-4d53-9323-f407a3657094,Namespace:calico-system,Attempt:0,}" Sep 4 04:20:06.339436 containerd[1582]: time="2025-09-04T04:20:06.339288657Z" level=error msg="Failed to destroy network for sandbox \"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:20:06.342669 systemd[1]: run-netns-cni\x2db30aecbd\x2de99b\x2d1d6e\x2db127\x2db681e2602ae7.mount: Deactivated successfully. 
Sep 4 04:20:06.395209 containerd[1582]: time="2025-09-04T04:20:06.395085156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxmmz,Uid:c4122301-ffe2-4d53-9323-f407a3657094,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:20:06.395624 kubelet[2768]: E0904 04:20:06.395525 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:20:06.396108 kubelet[2768]: E0904 04:20:06.395646 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxmmz" Sep 4 04:20:06.396108 kubelet[2768]: E0904 04:20:06.395680 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxmmz" Sep 4 
04:20:06.396108 kubelet[2768]: E0904 04:20:06.395758 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vxmmz_calico-system(c4122301-ffe2-4d53-9323-f407a3657094)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vxmmz_calico-system(c4122301-ffe2-4d53-9323-f407a3657094)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee54e3052b3aa9408ce956ea18569292c7b777fad582b4d9189cb25b261f7f33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vxmmz" podUID="c4122301-ffe2-4d53-9323-f407a3657094" Sep 4 04:20:12.734299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962044114.mount: Deactivated successfully. Sep 4 04:20:13.501790 containerd[1582]: time="2025-09-04T04:20:13.501699562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:13.502889 containerd[1582]: time="2025-09-04T04:20:13.502838979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 04:20:13.506084 containerd[1582]: time="2025-09-04T04:20:13.506027377Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:13.508516 containerd[1582]: time="2025-09-04T04:20:13.508456044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:13.509086 containerd[1582]: time="2025-09-04T04:20:13.509048698Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.409676846s" Sep 4 04:20:13.509162 containerd[1582]: time="2025-09-04T04:20:13.509093119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 04:20:13.520913 containerd[1582]: time="2025-09-04T04:20:13.520864872Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 04:20:13.546346 containerd[1582]: time="2025-09-04T04:20:13.546269130Z" level=info msg="Container f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:13.569369 containerd[1582]: time="2025-09-04T04:20:13.569283391Z" level=info msg="CreateContainer within sandbox \"99fc060342cb53167236a7c8295955c7c7b024515618301b5e2fca7476807f82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\"" Sep 4 04:20:13.570109 containerd[1582]: time="2025-09-04T04:20:13.570026539Z" level=info msg="StartContainer for \"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\"" Sep 4 04:20:13.572199 containerd[1582]: time="2025-09-04T04:20:13.572102504Z" level=info msg="connecting to shim f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9" address="unix:///run/containerd/s/dfef4edb9ae2355d128601f2400a9ec18a307337b329e57bf744da7640970606" protocol=ttrpc version=3 Sep 4 04:20:13.600342 systemd[1]: Started 
cri-containerd-f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9.scope - libcontainer container f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9. Sep 4 04:20:13.656368 containerd[1582]: time="2025-09-04T04:20:13.656328053Z" level=info msg="StartContainer for \"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\" returns successfully" Sep 4 04:20:13.740996 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 04:20:13.741875 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 04:20:13.949968 kubelet[2768]: I0904 04:20:13.949802 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-ca-bundle\") pod \"9be9fbca-716f-4567-931e-2b5c2be64b35\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " Sep 4 04:20:13.949968 kubelet[2768]: I0904 04:20:13.949858 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx96h\" (UniqueName: \"kubernetes.io/projected/9be9fbca-716f-4567-931e-2b5c2be64b35-kube-api-access-mx96h\") pod \"9be9fbca-716f-4567-931e-2b5c2be64b35\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " Sep 4 04:20:13.949968 kubelet[2768]: I0904 04:20:13.949889 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-backend-key-pair\") pod \"9be9fbca-716f-4567-931e-2b5c2be64b35\" (UID: \"9be9fbca-716f-4567-931e-2b5c2be64b35\") " Sep 4 04:20:13.951600 kubelet[2768]: I0904 04:20:13.951516 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9be9fbca-716f-4567-931e-2b5c2be64b35" (UID: 
"9be9fbca-716f-4567-931e-2b5c2be64b35"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 04:20:13.954719 kubelet[2768]: I0904 04:20:13.954669 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9be9fbca-716f-4567-931e-2b5c2be64b35" (UID: "9be9fbca-716f-4567-931e-2b5c2be64b35"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 04:20:13.955417 kubelet[2768]: I0904 04:20:13.955386 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be9fbca-716f-4567-931e-2b5c2be64b35-kube-api-access-mx96h" (OuterVolumeSpecName: "kube-api-access-mx96h") pod "9be9fbca-716f-4567-931e-2b5c2be64b35" (UID: "9be9fbca-716f-4567-931e-2b5c2be64b35"). InnerVolumeSpecName "kube-api-access-mx96h". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 04:20:13.956598 systemd[1]: var-lib-kubelet-pods-9be9fbca\x2d716f\x2d4567\x2d931e\x2d2b5c2be64b35-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 04:20:13.957091 systemd[1]: var-lib-kubelet-pods-9be9fbca\x2d716f\x2d4567\x2d931e\x2d2b5c2be64b35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmx96h.mount: Deactivated successfully. Sep 4 04:20:14.007282 systemd[1]: Removed slice kubepods-besteffort-pod9be9fbca_716f_4567_931e_2b5c2be64b35.slice - libcontainer container kubepods-besteffort-pod9be9fbca_716f_4567_931e_2b5c2be64b35.slice. 
Sep 4 04:20:14.051095 kubelet[2768]: I0904 04:20:14.051036 2768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx96h\" (UniqueName: \"kubernetes.io/projected/9be9fbca-716f-4567-931e-2b5c2be64b35-kube-api-access-mx96h\") on node \"localhost\" DevicePath \"\"" Sep 4 04:20:14.051095 kubelet[2768]: I0904 04:20:14.051073 2768 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 04:20:14.051095 kubelet[2768]: I0904 04:20:14.051083 2768 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be9fbca-716f-4567-931e-2b5c2be64b35-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 04:20:14.163380 kubelet[2768]: I0904 04:20:14.163052 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d4rr9" podStartSLOduration=1.973721454 podStartE2EDuration="23.163032668s" podCreationTimestamp="2025-09-04 04:19:51 +0000 UTC" firstStartedPulling="2025-09-04 04:19:52.320663632 +0000 UTC m=+16.709591559" lastFinishedPulling="2025-09-04 04:20:13.509974846 +0000 UTC m=+37.898902773" observedRunningTime="2025-09-04 04:20:14.152096595 +0000 UTC m=+38.541024522" watchObservedRunningTime="2025-09-04 04:20:14.163032668 +0000 UTC m=+38.551960595" Sep 4 04:20:14.206957 systemd[1]: Created slice kubepods-besteffort-pod292aa784_2e2e_42a7_89e8_b1a4fdc2cbd8.slice - libcontainer container kubepods-besteffort-pod292aa784_2e2e_42a7_89e8_b1a4fdc2cbd8.slice. 
Sep 4 04:20:14.254063 kubelet[2768]: I0904 04:20:14.253980 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8-whisker-backend-key-pair\") pod \"whisker-7859658fb9-w9zq4\" (UID: \"292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8\") " pod="calico-system/whisker-7859658fb9-w9zq4" Sep 4 04:20:14.254063 kubelet[2768]: I0904 04:20:14.254059 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5jmc\" (UniqueName: \"kubernetes.io/projected/292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8-kube-api-access-j5jmc\") pod \"whisker-7859658fb9-w9zq4\" (UID: \"292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8\") " pod="calico-system/whisker-7859658fb9-w9zq4" Sep 4 04:20:14.254296 kubelet[2768]: I0904 04:20:14.254085 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8-whisker-ca-bundle\") pod \"whisker-7859658fb9-w9zq4\" (UID: \"292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8\") " pod="calico-system/whisker-7859658fb9-w9zq4" Sep 4 04:20:14.511833 containerd[1582]: time="2025-09-04T04:20:14.511661082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7859658fb9-w9zq4,Uid:292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8,Namespace:calico-system,Attempt:0,}" Sep 4 04:20:14.799288 systemd-networkd[1465]: caliabb39f62177: Link UP Sep 4 04:20:14.799521 systemd-networkd[1465]: caliabb39f62177: Gained carrier Sep 4 04:20:14.819095 containerd[1582]: 2025-09-04 04:20:14.541 [INFO][3891] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 04:20:14.819095 containerd[1582]: 2025-09-04 04:20:14.563 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7859658fb9--w9zq4-eth0 
whisker-7859658fb9- calico-system 292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8 940 0 2025-09-04 04:20:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7859658fb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7859658fb9-w9zq4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliabb39f62177 [] [] }} ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-" Sep 4 04:20:14.819095 containerd[1582]: 2025-09-04 04:20:14.563 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.819095 containerd[1582]: 2025-09-04 04:20:14.645 [INFO][3905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" HandleID="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Workload="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.646 [INFO][3905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" HandleID="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Workload="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cf570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7859658fb9-w9zq4", "timestamp":"2025-09-04 04:20:14.645362643 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.647 [INFO][3905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.647 [INFO][3905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.647 [INFO][3905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.740 [INFO][3905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" host="localhost" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.750 [INFO][3905] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.759 [INFO][3905] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.761 [INFO][3905] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.764 [INFO][3905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:14.819458 containerd[1582]: 2025-09-04 04:20:14.764 [INFO][3905] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" host="localhost" Sep 4 04:20:14.819793 containerd[1582]: 2025-09-04 04:20:14.766 [INFO][3905] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4 Sep 4 04:20:14.819793 containerd[1582]: 
2025-09-04 04:20:14.771 [INFO][3905] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" host="localhost" Sep 4 04:20:14.819793 containerd[1582]: 2025-09-04 04:20:14.781 [INFO][3905] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" host="localhost" Sep 4 04:20:14.819793 containerd[1582]: 2025-09-04 04:20:14.781 [INFO][3905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" host="localhost" Sep 4 04:20:14.819793 containerd[1582]: 2025-09-04 04:20:14.781 [INFO][3905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:20:14.819793 containerd[1582]: 2025-09-04 04:20:14.781 [INFO][3905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" HandleID="k8s-pod-network.e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Workload="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.819969 containerd[1582]: 2025-09-04 04:20:14.786 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7859658fb9--w9zq4-eth0", GenerateName:"whisker-7859658fb9-", Namespace:"calico-system", SelfLink:"", UID:"292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
4, 4, 20, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7859658fb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7859658fb9-w9zq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliabb39f62177", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:14.819969 containerd[1582]: 2025-09-04 04:20:14.786 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.820090 containerd[1582]: 2025-09-04 04:20:14.786 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabb39f62177 ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.820090 containerd[1582]: 2025-09-04 04:20:14.799 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" 
WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:14.820192 containerd[1582]: 2025-09-04 04:20:14.802 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7859658fb9--w9zq4-eth0", GenerateName:"whisker-7859658fb9-", Namespace:"calico-system", SelfLink:"", UID:"292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 20, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7859658fb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4", Pod:"whisker-7859658fb9-w9zq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliabb39f62177", MAC:"6a:73:45:ca:98:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:14.820270 containerd[1582]: 2025-09-04 04:20:14.815 [INFO][3891] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" Namespace="calico-system" Pod="whisker-7859658fb9-w9zq4" WorkloadEndpoint="localhost-k8s-whisker--7859658fb9--w9zq4-eth0" Sep 4 04:20:15.235453 containerd[1582]: time="2025-09-04T04:20:15.235380901Z" level=info msg="connecting to shim e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4" address="unix:///run/containerd/s/633d00849cfd2db0de21a4c82442c406bacab3df3ee5e3801f410dfe93a56326" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:15.285576 systemd[1]: Started cri-containerd-e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4.scope - libcontainer container e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4. Sep 4 04:20:15.324624 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:15.493579 containerd[1582]: time="2025-09-04T04:20:15.493253716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7859658fb9-w9zq4,Uid:292aa784-2e2e-42a7-89e8-b1a4fdc2cbd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4\"" Sep 4 04:20:15.495675 containerd[1582]: time="2025-09-04T04:20:15.495638508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 04:20:15.776883 systemd-networkd[1465]: vxlan.calico: Link UP Sep 4 04:20:15.776918 systemd-networkd[1465]: vxlan.calico: Gained carrier Sep 4 04:20:15.993572 kubelet[2768]: E0904 04:20:15.993526 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:15.995527 containerd[1582]: time="2025-09-04T04:20:15.995484788Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-lvm6s,Uid:640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:20:15.995868 containerd[1582]: time="2025-09-04T04:20:15.995730693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vps4h,Uid:358ec66d-ac3e-40ac-8894-fbc7c8591e68,Namespace:kube-system,Attempt:0,}" Sep 4 04:20:16.002160 kubelet[2768]: I0904 04:20:15.999399 2768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be9fbca-716f-4567-931e-2b5c2be64b35" path="/var/lib/kubelet/pods/9be9fbca-716f-4567-931e-2b5c2be64b35/volumes" Sep 4 04:20:16.360461 kubelet[2768]: I0904 04:20:16.360385 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 04:20:16.374108 systemd-networkd[1465]: calie2f0a9b649c: Link UP Sep 4 04:20:16.374431 systemd-networkd[1465]: calie2f0a9b649c: Gained carrier Sep 4 04:20:16.491093 containerd[1582]: 2025-09-04 04:20:16.071 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0 calico-apiserver-84574cdc5d- calico-apiserver 640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b 870 0 2025-09-04 04:19:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84574cdc5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84574cdc5d-lvm6s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2f0a9b649c [] [] }} ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-" Sep 4 04:20:16.491093 containerd[1582]: 2025-09-04 04:20:16.071 [INFO][4136] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.491093 containerd[1582]: 2025-09-04 04:20:16.136 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" HandleID="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Workload="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.136 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" HandleID="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Workload="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010e730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84574cdc5d-lvm6s", "timestamp":"2025-09-04 04:20:16.135988073 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.136 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.136 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.136 [INFO][4182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.287 [INFO][4182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" host="localhost" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.294 [INFO][4182] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.300 [INFO][4182] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.303 [INFO][4182] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.307 [INFO][4182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:16.491413 containerd[1582]: 2025-09-04 04:20:16.307 [INFO][4182] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" host="localhost" Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.309 [INFO][4182] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3 Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.330 [INFO][4182] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" host="localhost" Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.365 [INFO][4182] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" host="localhost" Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.366 [INFO][4182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" host="localhost" Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.366 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:20:16.491739 containerd[1582]: 2025-09-04 04:20:16.366 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" HandleID="k8s-pod-network.a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Workload="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.491927 containerd[1582]: 2025-09-04 04:20:16.369 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0", GenerateName:"calico-apiserver-84574cdc5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84574cdc5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84574cdc5d-lvm6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f0a9b649c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:16.492031 containerd[1582]: 2025-09-04 04:20:16.369 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.492031 containerd[1582]: 2025-09-04 04:20:16.369 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2f0a9b649c ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.492031 containerd[1582]: 2025-09-04 04:20:16.373 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.492154 containerd[1582]: 2025-09-04 04:20:16.375 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0", GenerateName:"calico-apiserver-84574cdc5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84574cdc5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3", Pod:"calico-apiserver-84574cdc5d-lvm6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f0a9b649c", MAC:"ee:d0:6b:57:71:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:16.492243 containerd[1582]: 2025-09-04 04:20:16.484 [INFO][4136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-lvm6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--lvm6s-eth0" Sep 4 04:20:16.587679 containerd[1582]: time="2025-09-04T04:20:16.587608624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\" id:\"f11463bffce42a452fb9619323e9aad5d430b5c34ae84886319ea8d3d57e92e9\" pid:4239 exit_status:1 exited_at:{seconds:1756959616 nanos:587108789}" Sep 4 04:20:16.660691 systemd-networkd[1465]: cali3b03f886e51: Link UP Sep 4 04:20:16.664830 systemd-networkd[1465]: cali3b03f886e51: Gained carrier Sep 4 04:20:16.703315 systemd-networkd[1465]: caliabb39f62177: Gained IPv6LL Sep 4 04:20:16.727279 containerd[1582]: time="2025-09-04T04:20:16.727213072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\" id:\"062570b24a70c244308d366c1dd0866538f9e3f7bf4beaa6ba4454cd27c6d881\" pid:4265 exit_status:1 exited_at:{seconds:1756959616 nanos:726831495}" Sep 4 04:20:16.742000 containerd[1582]: 2025-09-04 04:20:16.063 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0 coredns-7c65d6cfc9- kube-system 358ec66d-ac3e-40ac-8894-fbc7c8591e68 861 0 2025-09-04 04:19:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vps4h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b03f886e51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-" Sep 4 04:20:16.742000 containerd[1582]: 2025-09-04 04:20:16.063 [INFO][4146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.742000 containerd[1582]: 2025-09-04 04:20:16.147 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" HandleID="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Workload="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.147 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" HandleID="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Workload="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031fb50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vps4h", "timestamp":"2025-09-04 04:20:16.147240532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.147 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.366 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.366 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.482 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" host="localhost" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.501 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.510 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.514 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.517 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:16.742682 containerd[1582]: 2025-09-04 04:20:16.517 [INFO][4173] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" host="localhost" Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.519 [INFO][4173] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.543 [INFO][4173] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" host="localhost" Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.651 [INFO][4173] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" host="localhost" Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.651 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" host="localhost" Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.651 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:20:16.743099 containerd[1582]: 2025-09-04 04:20:16.652 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" HandleID="k8s-pod-network.8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Workload="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.743310 containerd[1582]: 2025-09-04 04:20:16.656 [INFO][4146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"358ec66d-ac3e-40ac-8894-fbc7c8591e68", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vps4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b03f886e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:16.743439 containerd[1582]: 2025-09-04 04:20:16.656 [INFO][4146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.743439 containerd[1582]: 2025-09-04 04:20:16.656 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b03f886e51 ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.743439 containerd[1582]: 2025-09-04 04:20:16.666 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.743551 containerd[1582]: 2025-09-04 04:20:16.668 [INFO][4146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"358ec66d-ac3e-40ac-8894-fbc7c8591e68", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c", Pod:"coredns-7c65d6cfc9-vps4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b03f886e51", MAC:"3a:84:61:2d:2a:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:16.743551 containerd[1582]: 2025-09-04 04:20:16.733 [INFO][4146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vps4h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vps4h-eth0" Sep 4 04:20:16.771618 containerd[1582]: time="2025-09-04T04:20:16.771541109Z" level=info msg="connecting to shim a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3" address="unix:///run/containerd/s/5ac834b5abee4028ab87240434efe8de89e0785b4a7d41c1c5d0c1b7ed8419fd" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:16.796308 containerd[1582]: time="2025-09-04T04:20:16.796235262Z" level=info msg="connecting to shim 8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c" address="unix:///run/containerd/s/693f812b4e499e026b446f26612bf66eee5c5f20036964830e5074c584aed72e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:16.825949 systemd[1]: Started cri-containerd-a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3.scope - libcontainer container a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3. Sep 4 04:20:16.831116 systemd[1]: Started cri-containerd-8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c.scope - libcontainer container 8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c. 
Sep 4 04:20:16.843190 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:16.848170 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:16.893038 containerd[1582]: time="2025-09-04T04:20:16.892803233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-lvm6s,Uid:640dc2d5-cd5d-434f-8f0d-3d5aec52bc0b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3\"" Sep 4 04:20:16.906744 containerd[1582]: time="2025-09-04T04:20:16.906679492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vps4h,Uid:358ec66d-ac3e-40ac-8894-fbc7c8591e68,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c\"" Sep 4 04:20:16.908763 kubelet[2768]: E0904 04:20:16.908592 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:16.919270 containerd[1582]: time="2025-09-04T04:20:16.919020802Z" level=info msg="CreateContainer within sandbox \"8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 04:20:16.961439 containerd[1582]: time="2025-09-04T04:20:16.960389504Z" level=info msg="Container 7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:16.968927 containerd[1582]: time="2025-09-04T04:20:16.968849766Z" level=info msg="CreateContainer within sandbox \"8b60dbe1be7c2f95fbb3294cceb3184170df0b80a7572cd20e0889123627291c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040\"" Sep 4 
04:20:16.971157 containerd[1582]: time="2025-09-04T04:20:16.970202587Z" level=info msg="StartContainer for \"7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040\"" Sep 4 04:20:16.971382 containerd[1582]: time="2025-09-04T04:20:16.971351199Z" level=info msg="connecting to shim 7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040" address="unix:///run/containerd/s/693f812b4e499e026b446f26612bf66eee5c5f20036964830e5074c584aed72e" protocol=ttrpc version=3 Sep 4 04:20:16.992932 kubelet[2768]: E0904 04:20:16.992882 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:16.996799 containerd[1582]: time="2025-09-04T04:20:16.996749379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fh5g,Uid:19dae45b-0a4c-4ef8-8765-597e5b94e56b,Namespace:kube-system,Attempt:0,}" Sep 4 04:20:17.028440 systemd[1]: Started cri-containerd-7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040.scope - libcontainer container 7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040. 
Sep 4 04:20:17.073662 containerd[1582]: time="2025-09-04T04:20:17.073615000Z" level=info msg="StartContainer for \"7af55c2e11077354aba77a0c48328567c0fabf2e739c15d0e1afe58450d3d040\" returns successfully" Sep 4 04:20:17.148952 systemd-networkd[1465]: cali96278be10af: Link UP Sep 4 04:20:17.150339 kubelet[2768]: E0904 04:20:17.149538 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:17.151308 systemd-networkd[1465]: cali96278be10af: Gained carrier Sep 4 04:20:17.161761 kubelet[2768]: I0904 04:20:17.161683 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vps4h" podStartSLOduration=37.161663072 podStartE2EDuration="37.161663072s" podCreationTimestamp="2025-09-04 04:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:20:17.160789084 +0000 UTC m=+41.549717011" watchObservedRunningTime="2025-09-04 04:20:17.161663072 +0000 UTC m=+41.550590999" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.071 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0 coredns-7c65d6cfc9- kube-system 19dae45b-0a4c-4ef8-8765-597e5b94e56b 867 0 2025-09-04 04:19:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4fh5g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96278be10af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.072 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.109 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" HandleID="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Workload="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.109 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" HandleID="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Workload="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4fh5g", "timestamp":"2025-09-04 04:20:17.109744798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.110 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.110 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.110 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.116 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.120 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.124 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.125 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.127 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.127 [INFO][4424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.128 [INFO][4424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.132 [INFO][4424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.138 [INFO][4424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.138 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" host="localhost" Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.138 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:20:17.173797 containerd[1582]: 2025-09-04 04:20:17.138 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" HandleID="k8s-pod-network.99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Workload="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.145 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"19dae45b-0a4c-4ef8-8765-597e5b94e56b", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4fh5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96278be10af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.145 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.145 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96278be10af ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.148 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.150 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"19dae45b-0a4c-4ef8-8765-597e5b94e56b", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c", Pod:"coredns-7c65d6cfc9-4fh5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96278be10af", MAC:"b2:c8:cc:9a:30:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:17.176520 containerd[1582]: 2025-09-04 04:20:17.164 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4fh5g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4fh5g-eth0" Sep 4 04:20:17.214575 containerd[1582]: time="2025-09-04T04:20:17.214372266Z" level=info msg="connecting to shim 99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c" address="unix:///run/containerd/s/636b10e6f326716bbd2fc9aa0e492d663e5deaaa993140e1175ecd22d20e3099" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:17.249260 systemd[1]: Started cri-containerd-99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c.scope - libcontainer container 99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c. 
Sep 4 04:20:17.271861 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:17.308660 containerd[1582]: time="2025-09-04T04:20:17.308543009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fh5g,Uid:19dae45b-0a4c-4ef8-8765-597e5b94e56b,Namespace:kube-system,Attempt:0,} returns sandbox id \"99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c\"" Sep 4 04:20:17.309653 kubelet[2768]: E0904 04:20:17.309599 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:17.311972 containerd[1582]: time="2025-09-04T04:20:17.311913416Z" level=info msg="CreateContainer within sandbox \"99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 04:20:17.331170 containerd[1582]: time="2025-09-04T04:20:17.330605861Z" level=info msg="Container 07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:17.342167 containerd[1582]: time="2025-09-04T04:20:17.342089602Z" level=info msg="CreateContainer within sandbox \"99ab838ee1d806fc9431c6f4f4d591972de889f6dab9740eb00cb1b2d1d9ee6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954\"" Sep 4 04:20:17.344824 containerd[1582]: time="2025-09-04T04:20:17.343383775Z" level=info msg="StartContainer for \"07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954\"" Sep 4 04:20:17.344824 containerd[1582]: time="2025-09-04T04:20:17.344450930Z" level=info msg="connecting to shim 07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954" address="unix:///run/containerd/s/636b10e6f326716bbd2fc9aa0e492d663e5deaaa993140e1175ecd22d20e3099" protocol=ttrpc version=3 Sep 4 
04:20:17.355062 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:50770.service - OpenSSH per-connection server daemon (10.0.0.1:50770). Sep 4 04:20:17.370921 systemd[1]: Started cri-containerd-07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954.scope - libcontainer container 07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954. Sep 4 04:20:17.421718 containerd[1582]: time="2025-09-04T04:20:17.421661430Z" level=info msg="StartContainer for \"07769a8c533ccb4112cc032919e302a7cec07dda3d3c2e8b7fdd474e3337d954\" returns successfully" Sep 4 04:20:17.431233 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 50770 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:17.433530 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:17.440555 containerd[1582]: time="2025-09-04T04:20:17.440510510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:17.444039 containerd[1582]: time="2025-09-04T04:20:17.443520603Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:17.447649 containerd[1582]: time="2025-09-04T04:20:17.446433120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:17.447341 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 04:20:17.448179 systemd-logind[1513]: New session 10 of user core. 
Sep 4 04:20:17.449550 containerd[1582]: time="2025-09-04T04:20:17.449513573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.953835797s" Sep 4 04:20:17.449550 containerd[1582]: time="2025-09-04T04:20:17.449549405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 04:20:17.456485 containerd[1582]: time="2025-09-04T04:20:17.456429520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 04:20:17.457198 containerd[1582]: time="2025-09-04T04:20:17.457095558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 04:20:17.459981 containerd[1582]: time="2025-09-04T04:20:17.459207466Z" level=info msg="CreateContainer within sandbox \"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 04:20:17.468988 containerd[1582]: time="2025-09-04T04:20:17.468923011Z" level=info msg="Container b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:17.485615 containerd[1582]: time="2025-09-04T04:20:17.485544913Z" level=info msg="CreateContainer within sandbox \"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad\"" Sep 4 04:20:17.486137 containerd[1582]: time="2025-09-04T04:20:17.486093436Z" level=info msg="StartContainer for 
\"b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad\"" Sep 4 04:20:17.487652 containerd[1582]: time="2025-09-04T04:20:17.487559113Z" level=info msg="connecting to shim b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad" address="unix:///run/containerd/s/633d00849cfd2db0de21a4c82442c406bacab3df3ee5e3801f410dfe93a56326" protocol=ttrpc version=3 Sep 4 04:20:17.531363 systemd[1]: Started cri-containerd-b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad.scope - libcontainer container b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad. Sep 4 04:20:17.535338 systemd-networkd[1465]: calie2f0a9b649c: Gained IPv6LL Sep 4 04:20:17.599393 systemd-networkd[1465]: vxlan.calico: Gained IPv6LL Sep 4 04:20:17.622813 containerd[1582]: time="2025-09-04T04:20:17.622751238Z" level=info msg="StartContainer for \"b3857fd49f0d2310678910df2558fdeb766b9f7b6f2be838d3945a62acae2cad\" returns successfully" Sep 4 04:20:17.643772 sshd[4542]: Connection closed by 10.0.0.1 port 50770 Sep 4 04:20:17.644272 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Sep 4 04:20:17.649151 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:50770.service: Deactivated successfully. Sep 4 04:20:17.651841 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 04:20:17.653683 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit. Sep 4 04:20:17.655016 systemd-logind[1513]: Removed session 10. Sep 4 04:20:17.993991 containerd[1582]: time="2025-09-04T04:20:17.993910362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-tcfnr,Uid:a5774771-f398-4b32-a748-6ad0d5e830e7,Namespace:calico-system,Attempt:0,}" Sep 4 04:20:18.004875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027345162.mount: Deactivated successfully. 
Sep 4 04:20:18.047390 systemd-networkd[1465]: cali3b03f886e51: Gained IPv6LL Sep 4 04:20:18.151745 systemd-networkd[1465]: cali1af36f7dba1: Link UP Sep 4 04:20:18.153720 systemd-networkd[1465]: cali1af36f7dba1: Gained carrier Sep 4 04:20:18.160088 kubelet[2768]: E0904 04:20:18.160052 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:18.161931 kubelet[2768]: E0904 04:20:18.160794 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.046 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--tcfnr-eth0 goldmane-7988f88666- calico-system a5774771-f398-4b32-a748-6ad0d5e830e7 868 0 2025-09-04 04:19:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-tcfnr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1af36f7dba1 [] [] }} ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.046 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.082 [INFO][4628] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" HandleID="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Workload="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.083 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" HandleID="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Workload="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-tcfnr", "timestamp":"2025-09-04 04:20:18.082645572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.083 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.083 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.084 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.096 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.108 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.114 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.117 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.121 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.121 [INFO][4628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.127 [INFO][4628] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2 Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.132 [INFO][4628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.143 [INFO][4628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.143 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" host="localhost" Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.143 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:20:18.174616 containerd[1582]: 2025-09-04 04:20:18.143 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" HandleID="k8s-pod-network.91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Workload="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.148 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--tcfnr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a5774771-f398-4b32-a748-6ad0d5e830e7", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-tcfnr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af36f7dba1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.148 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.148 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af36f7dba1 ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.154 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.155 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--tcfnr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a5774771-f398-4b32-a748-6ad0d5e830e7", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2", Pod:"goldmane-7988f88666-tcfnr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af36f7dba1", MAC:"26:b4:99:7e:f8:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:18.175815 containerd[1582]: 2025-09-04 04:20:18.169 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" Namespace="calico-system" Pod="goldmane-7988f88666-tcfnr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--tcfnr-eth0" Sep 4 04:20:18.204160 kubelet[2768]: I0904 04:20:18.204035 2768 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-7c65d6cfc9-4fh5g" podStartSLOduration=38.20400773 podStartE2EDuration="38.20400773s" podCreationTimestamp="2025-09-04 04:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:20:18.183494387 +0000 UTC m=+42.572422314" watchObservedRunningTime="2025-09-04 04:20:18.20400773 +0000 UTC m=+42.592935657" Sep 4 04:20:18.209628 containerd[1582]: time="2025-09-04T04:20:18.209538384Z" level=info msg="connecting to shim 91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2" address="unix:///run/containerd/s/84edba6455b3be4360c59ed44f6ece22e715509b2e1b231895598e1d70f95cf3" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:18.253428 systemd[1]: Started cri-containerd-91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2.scope - libcontainer container 91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2. Sep 4 04:20:18.271185 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:18.305099 containerd[1582]: time="2025-09-04T04:20:18.305046587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-tcfnr,Uid:a5774771-f398-4b32-a748-6ad0d5e830e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2\"" Sep 4 04:20:18.431596 systemd-networkd[1465]: cali96278be10af: Gained IPv6LL Sep 4 04:20:18.994298 containerd[1582]: time="2025-09-04T04:20:18.994188481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-fdvcg,Uid:555f180c-2b38-4604-ad12-b5ea4891e907,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:20:19.143333 systemd-networkd[1465]: cali7a69a1c7119: Link UP Sep 4 04:20:19.145051 systemd-networkd[1465]: cali7a69a1c7119: Gained carrier Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.048 
[INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0 calico-apiserver-84574cdc5d- calico-apiserver 555f180c-2b38-4604-ad12-b5ea4891e907 869 0 2025-09-04 04:19:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84574cdc5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84574cdc5d-fdvcg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a69a1c7119 [] [] }} ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.048 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.083 [INFO][4708] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" HandleID="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Workload="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.083 [INFO][4708] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" HandleID="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" 
Workload="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84574cdc5d-fdvcg", "timestamp":"2025-09-04 04:20:19.083701037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.083 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.084 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.084 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.099 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.106 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.110 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.113 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.116 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.116 [INFO][4708] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.118 [INFO][4708] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.125 [INFO][4708] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.134 [INFO][4708] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.134 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" host="localhost" Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.134 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:20:19.165625 containerd[1582]: 2025-09-04 04:20:19.134 [INFO][4708] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" HandleID="k8s-pod-network.063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Workload="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.138 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0", GenerateName:"calico-apiserver-84574cdc5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"555f180c-2b38-4604-ad12-b5ea4891e907", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84574cdc5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84574cdc5d-fdvcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a69a1c7119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.138 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.138 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a69a1c7119 ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.144 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.145 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0", 
GenerateName:"calico-apiserver-84574cdc5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"555f180c-2b38-4604-ad12-b5ea4891e907", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84574cdc5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea", Pod:"calico-apiserver-84574cdc5d-fdvcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a69a1c7119", MAC:"16:f8:ce:a8:61:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:19.166503 containerd[1582]: 2025-09-04 04:20:19.157 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" Namespace="calico-apiserver" Pod="calico-apiserver-84574cdc5d-fdvcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--84574cdc5d--fdvcg-eth0" Sep 4 04:20:19.166871 kubelet[2768]: E0904 04:20:19.166836 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:19.167704 
kubelet[2768]: E0904 04:20:19.167677 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:19.274706 containerd[1582]: time="2025-09-04T04:20:19.274169436Z" level=info msg="connecting to shim 063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea" address="unix:///run/containerd/s/8557ee1475f7f788be39b40a0bbcdb6b6dc9f7c9013cec0ae849b964b46c65c9" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:19.322902 systemd[1]: Started cri-containerd-063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea.scope - libcontainer container 063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea. Sep 4 04:20:19.346432 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:19.390338 containerd[1582]: time="2025-09-04T04:20:19.390255537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84574cdc5d-fdvcg,Uid:555f180c-2b38-4604-ad12-b5ea4891e907,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea\"" Sep 4 04:20:19.903465 systemd-networkd[1465]: cali1af36f7dba1: Gained IPv6LL Sep 4 04:20:19.995671 containerd[1582]: time="2025-09-04T04:20:19.995591799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db647d8c7-gc5kl,Uid:fef499e2-73e2-47bb-b0c6-be0bc00abfc8,Namespace:calico-system,Attempt:0,}" Sep 4 04:20:20.169582 kubelet[2768]: E0904 04:20:20.169435 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:20:20.242706 systemd-networkd[1465]: cali4830c991cf7: Link UP Sep 4 04:20:20.242905 systemd-networkd[1465]: cali4830c991cf7: Gained carrier Sep 4 04:20:20.281588 containerd[1582]: 
2025-09-04 04:20:20.086 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0 calico-kube-controllers-7db647d8c7- calico-system fef499e2-73e2-47bb-b0c6-be0bc00abfc8 866 0 2025-09-04 04:19:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db647d8c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7db647d8c7-gc5kl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4830c991cf7 [] [] }} ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.086 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.121 [INFO][4792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" HandleID="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Workload="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.121 [INFO][4792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" 
HandleID="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Workload="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7db647d8c7-gc5kl", "timestamp":"2025-09-04 04:20:20.121076577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.121 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.121 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.121 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.128 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.134 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.145 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.147 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.151 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 
2025-09-04 04:20:20.151 [INFO][4792] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.152 [INFO][4792] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.163 [INFO][4792] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.232 [INFO][4792] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.232 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" host="localhost" Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.232 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:20:20.281588 containerd[1582]: 2025-09-04 04:20:20.232 [INFO][4792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" HandleID="k8s-pod-network.e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Workload="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.239 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0", GenerateName:"calico-kube-controllers-7db647d8c7-", Namespace:"calico-system", SelfLink:"", UID:"fef499e2-73e2-47bb-b0c6-be0bc00abfc8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db647d8c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7db647d8c7-gc5kl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4830c991cf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.239 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.239 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4830c991cf7 ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.252 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.258 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0", GenerateName:"calico-kube-controllers-7db647d8c7-", Namespace:"calico-system", SelfLink:"", UID:"fef499e2-73e2-47bb-b0c6-be0bc00abfc8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db647d8c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f", Pod:"calico-kube-controllers-7db647d8c7-gc5kl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4830c991cf7", MAC:"2e:b7:dc:6a:29:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:20.283396 containerd[1582]: 2025-09-04 04:20:20.271 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" Namespace="calico-system" Pod="calico-kube-controllers-7db647d8c7-gc5kl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db647d8c7--gc5kl-eth0" Sep 4 04:20:20.543371 systemd-networkd[1465]: cali7a69a1c7119: Gained IPv6LL Sep 4 04:20:20.617954 containerd[1582]: 
time="2025-09-04T04:20:20.617023545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:20.619313 containerd[1582]: time="2025-09-04T04:20:20.619184485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 04:20:20.621011 containerd[1582]: time="2025-09-04T04:20:20.620949376Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:20.625701 containerd[1582]: time="2025-09-04T04:20:20.624374102Z" level=info msg="connecting to shim e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f" address="unix:///run/containerd/s/ff100c8a30013ec740f5647fa0d0a8ac1813e10de1de043407aa41b95ecc66ca" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:20.625701 containerd[1582]: time="2025-09-04T04:20:20.624559643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:20.625898 containerd[1582]: time="2025-09-04T04:20:20.625810006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.169293523s" Sep 4 04:20:20.625955 containerd[1582]: time="2025-09-04T04:20:20.625913681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 04:20:20.631340 containerd[1582]: 
time="2025-09-04T04:20:20.631282869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 04:20:20.645220 containerd[1582]: time="2025-09-04T04:20:20.644548388Z" level=info msg="CreateContainer within sandbox \"a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 04:20:20.660183 containerd[1582]: time="2025-09-04T04:20:20.658412876Z" level=info msg="Container dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:20.679347 containerd[1582]: time="2025-09-04T04:20:20.678991243Z" level=info msg="CreateContainer within sandbox \"a84c07937043e844f8bb0fcc10f90e5f6df5ed55a450fc05e9406ceba94f3cd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17\"" Sep 4 04:20:20.680116 containerd[1582]: time="2025-09-04T04:20:20.680038875Z" level=info msg="StartContainer for \"dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17\"" Sep 4 04:20:20.681513 systemd[1]: Started cri-containerd-e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f.scope - libcontainer container e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f. Sep 4 04:20:20.682399 containerd[1582]: time="2025-09-04T04:20:20.681719064Z" level=info msg="connecting to shim dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17" address="unix:///run/containerd/s/5ac834b5abee4028ab87240434efe8de89e0785b4a7d41c1c5d0c1b7ed8419fd" protocol=ttrpc version=3 Sep 4 04:20:20.711369 systemd[1]: Started cri-containerd-dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17.scope - libcontainer container dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17. 
Sep 4 04:20:20.720401 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:20.775700 containerd[1582]: time="2025-09-04T04:20:20.775493661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db647d8c7-gc5kl,Uid:fef499e2-73e2-47bb-b0c6-be0bc00abfc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f\"" Sep 4 04:20:21.003279 containerd[1582]: time="2025-09-04T04:20:21.003225081Z" level=info msg="StartContainer for \"dafdb8b4ad644790dc4af5699f93c1810c8204322592c8ea618b3355adedda17\" returns successfully" Sep 4 04:20:21.385353 kubelet[2768]: I0904 04:20:21.384947 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84574cdc5d-lvm6s" podStartSLOduration=28.658343213 podStartE2EDuration="32.384925097s" podCreationTimestamp="2025-09-04 04:19:49 +0000 UTC" firstStartedPulling="2025-09-04 04:20:16.900700602 +0000 UTC m=+41.289628529" lastFinishedPulling="2025-09-04 04:20:20.627282486 +0000 UTC m=+45.016210413" observedRunningTime="2025-09-04 04:20:21.384350738 +0000 UTC m=+45.773278685" watchObservedRunningTime="2025-09-04 04:20:21.384925097 +0000 UTC m=+45.773853024" Sep 4 04:20:21.567419 systemd-networkd[1465]: cali4830c991cf7: Gained IPv6LL Sep 4 04:20:21.993869 containerd[1582]: time="2025-09-04T04:20:21.993794552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxmmz,Uid:c4122301-ffe2-4d53-9323-f407a3657094,Namespace:calico-system,Attempt:0,}" Sep 4 04:20:22.184286 kubelet[2768]: I0904 04:20:22.184237 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 04:20:22.405558 systemd-networkd[1465]: cali8551530bbf1: Link UP Sep 4 04:20:22.407716 systemd-networkd[1465]: cali8551530bbf1: Gained carrier Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.206 [INFO][4905] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vxmmz-eth0 csi-node-driver- calico-system c4122301-ffe2-4d53-9323-f407a3657094 748 0 2025-09-04 04:19:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vxmmz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8551530bbf1 [] [] }} ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.206 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.246 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" HandleID="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Workload="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.246 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" HandleID="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Workload="localhost-k8s-csi--node--driver--vxmmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00010fd70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vxmmz", "timestamp":"2025-09-04 04:20:22.246266793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.246 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.246 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.246 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.257 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.266 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.273 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.276 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.279 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.279 [INFO][4920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" host="localhost" Sep 
4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.281 [INFO][4920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.288 [INFO][4920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.398 [INFO][4920] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.399 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" host="localhost" Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.399 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:20:22.531648 containerd[1582]: 2025-09-04 04:20:22.399 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" HandleID="k8s-pod-network.09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Workload="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.402 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vxmmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4122301-ffe2-4d53-9323-f407a3657094", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vxmmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8551530bbf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.402 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.403 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8551530bbf1 ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.407 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.408 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vxmmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4122301-ffe2-4d53-9323-f407a3657094", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 19, 52, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc", Pod:"csi-node-driver-vxmmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8551530bbf1", MAC:"82:69:01:bb:20:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:20:22.532460 containerd[1582]: 2025-09-04 04:20:22.526 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" Namespace="calico-system" Pod="csi-node-driver-vxmmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxmmz-eth0" Sep 4 04:20:22.565487 containerd[1582]: time="2025-09-04T04:20:22.565416447Z" level=info msg="connecting to shim 09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc" address="unix:///run/containerd/s/c4b3dc2dd0eef86c690cd94923d74213d5029e40234e702e789e48ba08dbc7a1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:20:22.607628 systemd[1]: Started cri-containerd-09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc.scope - libcontainer container 
09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc. Sep 4 04:20:22.636430 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:20:22.662113 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:52360.service - OpenSSH per-connection server daemon (10.0.0.1:52360). Sep 4 04:20:22.663421 containerd[1582]: time="2025-09-04T04:20:22.662852523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxmmz,Uid:c4122301-ffe2-4d53-9323-f407a3657094,Namespace:calico-system,Attempt:0,} returns sandbox id \"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc\"" Sep 4 04:20:22.788494 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 52360 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:22.793158 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:22.805280 systemd-logind[1513]: New session 11 of user core. Sep 4 04:20:22.810633 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 04:20:23.101022 sshd[4987]: Connection closed by 10.0.0.1 port 52360 Sep 4 04:20:23.102364 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Sep 4 04:20:23.106214 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:52360.service: Deactivated successfully. Sep 4 04:20:23.109247 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 04:20:23.113006 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit. Sep 4 04:20:23.114443 systemd-logind[1513]: Removed session 11. Sep 4 04:20:23.412826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287371999.mount: Deactivated successfully. 
Sep 4 04:20:23.488873 systemd-networkd[1465]: cali8551530bbf1: Gained IPv6LL Sep 4 04:20:23.533642 containerd[1582]: time="2025-09-04T04:20:23.533563193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:23.535409 containerd[1582]: time="2025-09-04T04:20:23.535353971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 04:20:23.537674 containerd[1582]: time="2025-09-04T04:20:23.537620586Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:23.541248 containerd[1582]: time="2025-09-04T04:20:23.541182466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:23.541836 containerd[1582]: time="2025-09-04T04:20:23.541780053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.910442435s" Sep 4 04:20:23.541836 containerd[1582]: time="2025-09-04T04:20:23.541819434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 04:20:23.544734 containerd[1582]: time="2025-09-04T04:20:23.544680992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 04:20:23.546055 containerd[1582]: 
time="2025-09-04T04:20:23.546009286Z" level=info msg="CreateContainer within sandbox \"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 04:20:23.567929 containerd[1582]: time="2025-09-04T04:20:23.567868095Z" level=info msg="Container 752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:23.586436 containerd[1582]: time="2025-09-04T04:20:23.586287434Z" level=info msg="CreateContainer within sandbox \"e11d25c2c2f5e6d5f208628c70d4072858d98191d1e556a1c01fc53c555830d4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8\"" Sep 4 04:20:23.590242 containerd[1582]: time="2025-09-04T04:20:23.587230504Z" level=info msg="StartContainer for \"752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8\"" Sep 4 04:20:23.590242 containerd[1582]: time="2025-09-04T04:20:23.589878336Z" level=info msg="connecting to shim 752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8" address="unix:///run/containerd/s/633d00849cfd2db0de21a4c82442c406bacab3df3ee5e3801f410dfe93a56326" protocol=ttrpc version=3 Sep 4 04:20:23.638516 systemd[1]: Started cri-containerd-752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8.scope - libcontainer container 752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8. Sep 4 04:20:23.724358 containerd[1582]: time="2025-09-04T04:20:23.724298449Z" level=info msg="StartContainer for \"752d4d215e08e0122af3011e6b7de03f46736992eebd5ba14499f2ac5931a3e8\" returns successfully" Sep 4 04:20:27.239706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338147418.mount: Deactivated successfully. 
Sep 4 04:20:27.937987 containerd[1582]: time="2025-09-04T04:20:27.937901593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:27.939283 containerd[1582]: time="2025-09-04T04:20:27.939244400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 04:20:27.940704 containerd[1582]: time="2025-09-04T04:20:27.940659028Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:27.942876 containerd[1582]: time="2025-09-04T04:20:27.942834002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:27.943480 containerd[1582]: time="2025-09-04T04:20:27.943444975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.398718471s" Sep 4 04:20:27.943480 containerd[1582]: time="2025-09-04T04:20:27.943477494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 04:20:27.944770 containerd[1582]: time="2025-09-04T04:20:27.944735678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 04:20:27.952242 containerd[1582]: time="2025-09-04T04:20:27.952191925Z" level=info msg="CreateContainer within sandbox \"91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 04:20:27.962709 containerd[1582]: time="2025-09-04T04:20:27.962657457Z" level=info msg="Container 7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:27.975917 containerd[1582]: time="2025-09-04T04:20:27.975858604Z" level=info msg="CreateContainer within sandbox \"91d389476a246d7b8249817ddbdda335198a914172c1ad242a92f1befbc8dcb2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\"" Sep 4 04:20:27.977180 containerd[1582]: time="2025-09-04T04:20:27.976558149Z" level=info msg="StartContainer for \"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\"" Sep 4 04:20:27.977991 containerd[1582]: time="2025-09-04T04:20:27.977941059Z" level=info msg="connecting to shim 7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9" address="unix:///run/containerd/s/84edba6455b3be4360c59ed44f6ece22e715509b2e1b231895598e1d70f95cf3" protocol=ttrpc version=3 Sep 4 04:20:28.046626 systemd[1]: Started cri-containerd-7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9.scope - libcontainer container 7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9. Sep 4 04:20:28.116235 containerd[1582]: time="2025-09-04T04:20:28.116169008Z" level=info msg="StartContainer for \"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" returns successfully" Sep 4 04:20:28.118836 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:52376.service - OpenSSH per-connection server daemon (10.0.0.1:52376). 
Sep 4 04:20:28.207473 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 52376 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:28.209256 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:28.214237 systemd-logind[1513]: New session 12 of user core. Sep 4 04:20:28.223463 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 04:20:28.239862 kubelet[2768]: I0904 04:20:28.239768 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7859658fb9-w9zq4" podStartSLOduration=6.191377226 podStartE2EDuration="14.239746016s" podCreationTimestamp="2025-09-04 04:20:14 +0000 UTC" firstStartedPulling="2025-09-04 04:20:15.495021545 +0000 UTC m=+39.883949462" lastFinishedPulling="2025-09-04 04:20:23.543390315 +0000 UTC m=+47.932318252" observedRunningTime="2025-09-04 04:20:24.205379389 +0000 UTC m=+48.594307316" watchObservedRunningTime="2025-09-04 04:20:28.239746016 +0000 UTC m=+52.628673943" Sep 4 04:20:28.327181 containerd[1582]: time="2025-09-04T04:20:28.327100208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" id:\"fc1cdc38390b916bde2b14cd3ce104bf2857b7ec6ad1c1e1eb7c77dfd476fd09\" pid:5119 exit_status:1 exited_at:{seconds:1756959628 nanos:326655826}" Sep 4 04:20:28.387770 containerd[1582]: time="2025-09-04T04:20:28.387697606Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:28.390260 sshd[5112]: Connection closed by 10.0.0.1 port 52376 Sep 4 04:20:28.390666 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Sep 4 04:20:28.395257 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:52376.service: Deactivated successfully. Sep 4 04:20:28.397712 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 4 04:20:28.399964 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit. Sep 4 04:20:28.401162 systemd-logind[1513]: Removed session 12. Sep 4 04:20:28.520953 containerd[1582]: time="2025-09-04T04:20:28.520764703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 04:20:28.523617 containerd[1582]: time="2025-09-04T04:20:28.523568145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 578.785101ms" Sep 4 04:20:28.523617 containerd[1582]: time="2025-09-04T04:20:28.523616102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 04:20:28.524721 containerd[1582]: time="2025-09-04T04:20:28.524693510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 04:20:28.526004 containerd[1582]: time="2025-09-04T04:20:28.525966486Z" level=info msg="CreateContainer within sandbox \"063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 04:20:28.552102 containerd[1582]: time="2025-09-04T04:20:28.552026327Z" level=info msg="Container fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:28.564161 containerd[1582]: time="2025-09-04T04:20:28.562914608Z" level=info msg="CreateContainer within sandbox \"063bfc46bff286bef8c951f678ccf497dc745e423c887414d4b7792cd5675eea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67\"" Sep 4 04:20:28.564930 containerd[1582]: time="2025-09-04T04:20:28.564899014Z" level=info msg="StartContainer for \"fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67\"" Sep 4 04:20:28.566649 containerd[1582]: time="2025-09-04T04:20:28.566577611Z" level=info msg="connecting to shim fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67" address="unix:///run/containerd/s/8557ee1475f7f788be39b40a0bbcdb6b6dc9f7c9013cec0ae849b964b46c65c9" protocol=ttrpc version=3 Sep 4 04:20:28.590473 systemd[1]: Started cri-containerd-fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67.scope - libcontainer container fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67. Sep 4 04:20:28.654488 containerd[1582]: time="2025-09-04T04:20:28.654426807Z" level=info msg="StartContainer for \"fc820fbf1620fec37b63e3db0beb5cde4fcfe1cb48473658e486693f17dbdb67\" returns successfully" Sep 4 04:20:29.242999 kubelet[2768]: I0904 04:20:29.242708 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-tcfnr" podStartSLOduration=28.604593131 podStartE2EDuration="38.242683065s" podCreationTimestamp="2025-09-04 04:19:51 +0000 UTC" firstStartedPulling="2025-09-04 04:20:18.306450057 +0000 UTC m=+42.695377984" lastFinishedPulling="2025-09-04 04:20:27.944539981 +0000 UTC m=+52.333467918" observedRunningTime="2025-09-04 04:20:28.239919463 +0000 UTC m=+52.628847400" watchObservedRunningTime="2025-09-04 04:20:29.242683065 +0000 UTC m=+53.631610992" Sep 4 04:20:29.244587 kubelet[2768]: I0904 04:20:29.244206 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84574cdc5d-fdvcg" podStartSLOduration=31.111512623 podStartE2EDuration="40.244191798s" podCreationTimestamp="2025-09-04 04:19:49 +0000 UTC" firstStartedPulling="2025-09-04 04:20:19.391771602 +0000 UTC m=+43.780699529" 
lastFinishedPulling="2025-09-04 04:20:28.524450777 +0000 UTC m=+52.913378704" observedRunningTime="2025-09-04 04:20:29.241022054 +0000 UTC m=+53.629950001" watchObservedRunningTime="2025-09-04 04:20:29.244191798 +0000 UTC m=+53.633119725" Sep 4 04:20:29.340623 containerd[1582]: time="2025-09-04T04:20:29.340455737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" id:\"914caf7e5d5d3eb6e8d61fd8de216735a849636198edb971377e8ccdb7af2cfa\" pid:5190 exit_status:1 exited_at:{seconds:1756959629 nanos:339664808}" Sep 4 04:20:30.225846 kubelet[2768]: I0904 04:20:30.225789 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 04:20:30.630776 containerd[1582]: time="2025-09-04T04:20:30.630589734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" id:\"f6c4e4951070f483cdd85c1766bd3ab57f2f066ad95eda4867bfde39c1131d41\" pid:5220 exited_at:{seconds:1756959630 nanos:630178720}" Sep 4 04:20:31.568715 containerd[1582]: time="2025-09-04T04:20:31.568614914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:31.585449 containerd[1582]: time="2025-09-04T04:20:31.585338734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 04:20:31.593928 containerd[1582]: time="2025-09-04T04:20:31.593806364Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:31.685309 containerd[1582]: time="2025-09-04T04:20:31.685217872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:31.685805 containerd[1582]: time="2025-09-04T04:20:31.685719866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.160990419s" Sep 4 04:20:31.685805 containerd[1582]: time="2025-09-04T04:20:31.685766120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 04:20:31.687802 containerd[1582]: time="2025-09-04T04:20:31.687542509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 04:20:31.706156 containerd[1582]: time="2025-09-04T04:20:31.706092609Z" level=info msg="CreateContainer within sandbox \"e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 04:20:31.923007 containerd[1582]: time="2025-09-04T04:20:31.922378804Z" level=info msg="Container e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:20:32.039241 containerd[1582]: time="2025-09-04T04:20:32.039176572Z" level=info msg="CreateContainer within sandbox \"e2d6e06d6ba03cc3260db3c7acef2403cba4a9251910a85f2043176de0e4160f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\"" Sep 4 04:20:32.040279 containerd[1582]: time="2025-09-04T04:20:32.039689678Z" level=info msg="StartContainer for \"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\"" Sep 4 04:20:32.040869 containerd[1582]: 
time="2025-09-04T04:20:32.040838196Z" level=info msg="connecting to shim e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d" address="unix:///run/containerd/s/ff100c8a30013ec740f5647fa0d0a8ac1813e10de1de043407aa41b95ecc66ca" protocol=ttrpc version=3 Sep 4 04:20:32.072406 systemd[1]: Started cri-containerd-e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d.scope - libcontainer container e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d. Sep 4 04:20:32.177088 kubelet[2768]: I0904 04:20:32.176646 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 04:20:32.271315 containerd[1582]: time="2025-09-04T04:20:32.271229344Z" level=info msg="StartContainer for \"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\" returns successfully" Sep 4 04:20:33.300009 kubelet[2768]: I0904 04:20:33.299854 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7db647d8c7-gc5kl" podStartSLOduration=30.391323977 podStartE2EDuration="41.29981444s" podCreationTimestamp="2025-09-04 04:19:52 +0000 UTC" firstStartedPulling="2025-09-04 04:20:20.778856586 +0000 UTC m=+45.167784514" lastFinishedPulling="2025-09-04 04:20:31.68734705 +0000 UTC m=+56.076274977" observedRunningTime="2025-09-04 04:20:33.297819304 +0000 UTC m=+57.686747231" watchObservedRunningTime="2025-09-04 04:20:33.29981444 +0000 UTC m=+57.688742367" Sep 4 04:20:33.328489 containerd[1582]: time="2025-09-04T04:20:33.328379334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\" id:\"2e9c09dbcfbdfd8fd6e20fb3065f755085df6193a0e99bd0859e492b15dfddae\" pid:5297 exited_at:{seconds:1756959633 nanos:327183946}" Sep 4 04:20:33.407210 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). 
Sep 4 04:20:33.621089 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:33.622904 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:33.628058 systemd-logind[1513]: New session 13 of user core. Sep 4 04:20:33.636355 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 04:20:33.798731 sshd[5313]: Connection closed by 10.0.0.1 port 40120 Sep 4 04:20:33.799186 sshd-session[5309]: pam_unix(sshd:session): session closed for user core Sep 4 04:20:33.811442 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:40120.service: Deactivated successfully. Sep 4 04:20:33.814264 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 04:20:33.815715 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit. Sep 4 04:20:33.817868 systemd-logind[1513]: Removed session 13. Sep 4 04:20:33.819899 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Sep 4 04:20:33.883449 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:33.885356 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:33.890921 systemd-logind[1513]: New session 14 of user core. Sep 4 04:20:33.904389 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 04:20:34.092427 sshd[5330]: Connection closed by 10.0.0.1 port 40130 Sep 4 04:20:34.093558 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Sep 4 04:20:34.108413 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:40130.service: Deactivated successfully. Sep 4 04:20:34.114896 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 04:20:34.119408 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit. 
Sep 4 04:20:34.125232 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:40132.service - OpenSSH per-connection server daemon (10.0.0.1:40132). Sep 4 04:20:34.126856 systemd-logind[1513]: Removed session 14. Sep 4 04:20:34.189541 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 40132 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:20:34.192672 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:20:34.199111 systemd-logind[1513]: New session 15 of user core. Sep 4 04:20:34.207366 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 04:20:34.245267 containerd[1582]: time="2025-09-04T04:20:34.245177023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:34.246687 containerd[1582]: time="2025-09-04T04:20:34.246652695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 04:20:34.248351 containerd[1582]: time="2025-09-04T04:20:34.248320743Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:34.250477 containerd[1582]: time="2025-09-04T04:20:34.250452591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:20:34.251281 containerd[1582]: time="2025-09-04T04:20:34.251252483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 
2.563666252s"
Sep 4 04:20:34.251362 containerd[1582]: time="2025-09-04T04:20:34.251288880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 4 04:20:34.253616 containerd[1582]: time="2025-09-04T04:20:34.253583649Z" level=info msg="CreateContainer within sandbox \"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 4 04:20:34.270185 containerd[1582]: time="2025-09-04T04:20:34.269844669Z" level=info msg="Container 41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:20:34.300712 containerd[1582]: time="2025-09-04T04:20:34.300632758Z" level=info msg="CreateContainer within sandbox \"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807\""
Sep 4 04:20:34.300712 containerd[1582]: time="2025-09-04T04:20:34.301505665Z" level=info msg="StartContainer for \"41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807\""
Sep 4 04:20:34.300712 containerd[1582]: time="2025-09-04T04:20:34.303006073Z" level=info msg="connecting to shim 41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807" address="unix:///run/containerd/s/c4b3dc2dd0eef86c690cd94923d74213d5029e40234e702e789e48ba08dbc7a1" protocol=ttrpc version=3
Sep 4 04:20:34.335428 systemd[1]: Started cri-containerd-41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807.scope - libcontainer container 41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807.
Sep 4 04:20:34.347352 sshd[5348]: Connection closed by 10.0.0.1 port 40132
Sep 4 04:20:34.347860 sshd-session[5345]: pam_unix(sshd:session): session closed for user core
Sep 4 04:20:34.353810 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:40132.service: Deactivated successfully.
Sep 4 04:20:34.356222 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 04:20:34.357277 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit.
Sep 4 04:20:34.359642 systemd-logind[1513]: Removed session 15.
Sep 4 04:20:34.435750 containerd[1582]: time="2025-09-04T04:20:34.435680572Z" level=info msg="StartContainer for \"41887a2a0b6fb5627d9b72c24c57d0b6a7fac384ea415d29f5914054d2d2a807\" returns successfully"
Sep 4 04:20:34.436971 containerd[1582]: time="2025-09-04T04:20:34.436934443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 4 04:20:36.261059 containerd[1582]: time="2025-09-04T04:20:36.260985014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:36.262944 containerd[1582]: time="2025-09-04T04:20:36.262874757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 4 04:20:36.269839 containerd[1582]: time="2025-09-04T04:20:36.269795763Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:36.273053 containerd[1582]: time="2025-09-04T04:20:36.272990060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:20:36.273772 containerd[1582]: time="2025-09-04T04:20:36.273726951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.836746922s"
Sep 4 04:20:36.273855 containerd[1582]: time="2025-09-04T04:20:36.273775330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 4 04:20:36.277075 containerd[1582]: time="2025-09-04T04:20:36.277018510Z" level=info msg="CreateContainer within sandbox \"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 04:20:36.287475 containerd[1582]: time="2025-09-04T04:20:36.287417179Z" level=info msg="Container dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:20:36.303820 containerd[1582]: time="2025-09-04T04:20:36.303750141Z" level=info msg="CreateContainer within sandbox \"09dce9ac3f8e2a80dddbfcafc48c98281e06825cb2ad11c88d879353990d61fc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f\""
Sep 4 04:20:36.304647 containerd[1582]: time="2025-09-04T04:20:36.304597165Z" level=info msg="StartContainer for \"dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f\""
Sep 4 04:20:36.306519 containerd[1582]: time="2025-09-04T04:20:36.306496106Z" level=info msg="connecting to shim dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f" address="unix:///run/containerd/s/c4b3dc2dd0eef86c690cd94923d74213d5029e40234e702e789e48ba08dbc7a1" protocol=ttrpc version=3
Sep 4 04:20:36.330513 systemd[1]: Started cri-containerd-dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f.scope - libcontainer container dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f.
Sep 4 04:20:36.409002 containerd[1582]: time="2025-09-04T04:20:36.408940551Z" level=info msg="StartContainer for \"dd6c11986b8219c7095e8783dded58991dfa798e3068e5230756f6e595290f0f\" returns successfully"
Sep 4 04:20:37.104461 kubelet[2768]: I0904 04:20:37.104415 2768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 4 04:20:37.104461 kubelet[2768]: I0904 04:20:37.104459 2768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 4 04:20:39.366380 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:40148.service - OpenSSH per-connection server daemon (10.0.0.1:40148).
Sep 4 04:20:39.444929 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:20:39.447566 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:20:39.452922 systemd-logind[1513]: New session 16 of user core.
Sep 4 04:20:39.462285 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 04:20:39.607691 sshd[5445]: Connection closed by 10.0.0.1 port 40148
Sep 4 04:20:39.608143 sshd-session[5442]: pam_unix(sshd:session): session closed for user core
Sep 4 04:20:39.614783 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:40148.service: Deactivated successfully.
Sep 4 04:20:39.617397 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 04:20:39.618607 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit.
Sep 4 04:20:39.620592 systemd-logind[1513]: Removed session 16.
Sep 4 04:20:40.431056 kubelet[2768]: I0904 04:20:40.431000 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 04:20:40.517800 kubelet[2768]: I0904 04:20:40.517676 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vxmmz" podStartSLOduration=34.911974224 podStartE2EDuration="48.517271092s" podCreationTimestamp="2025-09-04 04:19:52 +0000 UTC" firstStartedPulling="2025-09-04 04:20:22.669296974 +0000 UTC m=+47.058224901" lastFinishedPulling="2025-09-04 04:20:36.274593842 +0000 UTC m=+60.663521769" observedRunningTime="2025-09-04 04:20:37.309167592 +0000 UTC m=+61.698095519" watchObservedRunningTime="2025-09-04 04:20:40.517271092 +0000 UTC m=+64.906199019"
Sep 4 04:20:44.665922 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:34570.service - OpenSSH per-connection server daemon (10.0.0.1:34570).
Sep 4 04:20:44.861444 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 34570 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:20:44.860386 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:20:44.883204 systemd-logind[1513]: New session 17 of user core.
Sep 4 04:20:44.894505 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 04:20:45.190182 kernel: hrtimer: interrupt took 8029177 ns
Sep 4 04:20:45.252262 sshd[5472]: Connection closed by 10.0.0.1 port 34570
Sep 4 04:20:45.253486 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Sep 4 04:20:45.266032 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:34570.service: Deactivated successfully.
Sep 4 04:20:45.270816 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 04:20:45.274618 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit.
Sep 4 04:20:45.277203 systemd-logind[1513]: Removed session 17.
Sep 4 04:20:46.554076 containerd[1582]: time="2025-09-04T04:20:46.554008757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\" id:\"7eaa23402957edfac0bbcfd7a6b83a9353a0d2d1a606502f9fd85414130a2f89\" pid:5497 exited_at:{seconds:1756959646 nanos:553587591}"
Sep 4 04:20:48.992842 kubelet[2768]: E0904 04:20:48.992773 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:20:50.276916 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:42084.service - OpenSSH per-connection server daemon (10.0.0.1:42084).
Sep 4 04:20:50.334631 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 42084 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:20:50.336921 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:20:50.341633 systemd-logind[1513]: New session 18 of user core.
Sep 4 04:20:50.351289 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 04:20:50.499443 sshd[5514]: Connection closed by 10.0.0.1 port 42084
Sep 4 04:20:50.499937 sshd-session[5511]: pam_unix(sshd:session): session closed for user core
Sep 4 04:20:50.501084 containerd[1582]: time="2025-09-04T04:20:50.500924740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" id:\"77f68c62ddcc5f753c1881dd0489ce4596796dabbfa2075ea52803a8035efb2d\" pid:5534 exited_at:{seconds:1756959650 nanos:500206506}"
Sep 4 04:20:50.505933 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:42084.service: Deactivated successfully.
Sep 4 04:20:50.508095 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 04:20:50.509089 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit.
Sep 4 04:20:50.511296 systemd-logind[1513]: Removed session 18.
Sep 4 04:20:55.375477 containerd[1582]: time="2025-09-04T04:20:55.375413974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\" id:\"548892c5e77727d073507fb06012870701b21ccc48e013dbcef5b76b43bcdb74\" pid:5564 exited_at:{seconds:1756959655 nanos:374974625}"
Sep 4 04:20:55.516631 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:42086.service - OpenSSH per-connection server daemon (10.0.0.1:42086).
Sep 4 04:20:55.591506 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 42086 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:20:55.595417 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:20:55.602930 systemd-logind[1513]: New session 19 of user core.
Sep 4 04:20:55.617247 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 04:20:55.817388 sshd[5578]: Connection closed by 10.0.0.1 port 42086
Sep 4 04:20:55.817778 sshd-session[5575]: pam_unix(sshd:session): session closed for user core
Sep 4 04:20:55.823373 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:42086.service: Deactivated successfully.
Sep 4 04:20:55.825955 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 04:20:55.826937 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit.
Sep 4 04:20:55.828958 systemd-logind[1513]: Removed session 19.
Sep 4 04:21:00.836843 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:45870.service - OpenSSH per-connection server daemon (10.0.0.1:45870).
Sep 4 04:21:00.894916 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 45870 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:00.896477 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:00.901512 systemd-logind[1513]: New session 20 of user core.
Sep 4 04:21:00.919430 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 04:21:01.055612 sshd[5601]: Connection closed by 10.0.0.1 port 45870
Sep 4 04:21:01.056159 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:01.065659 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:45870.service: Deactivated successfully.
Sep 4 04:21:01.067781 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 04:21:01.068589 systemd-logind[1513]: Session 20 logged out. Waiting for processes to exit.
Sep 4 04:21:01.071452 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:45878.service - OpenSSH per-connection server daemon (10.0.0.1:45878).
Sep 4 04:21:01.072288 systemd-logind[1513]: Removed session 20.
Sep 4 04:21:01.127965 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 45878 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:01.129864 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:01.134940 systemd-logind[1513]: New session 21 of user core.
Sep 4 04:21:01.145456 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 04:21:01.480206 sshd[5617]: Connection closed by 10.0.0.1 port 45878
Sep 4 04:21:01.482750 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:01.490662 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:45878.service: Deactivated successfully.
Sep 4 04:21:01.493417 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 04:21:01.494658 systemd-logind[1513]: Session 21 logged out. Waiting for processes to exit.
Sep 4 04:21:01.498551 systemd-logind[1513]: Removed session 21.
Sep 4 04:21:01.500455 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:45884.service - OpenSSH per-connection server daemon (10.0.0.1:45884).
Sep 4 04:21:01.583092 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 45884 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:01.585355 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:01.592737 systemd-logind[1513]: New session 22 of user core.
Sep 4 04:21:01.606499 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 04:21:02.624958 containerd[1582]: time="2025-09-04T04:21:02.624887616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e56c95fbe93bee3a018ea485c28431f3af0686dd6a99a72f0dcc4b090636852d\" id:\"849424db8fecadd2ea33fc0c009f33d84f2c90c8f9e64007b7b42ef7e5c5da13\" pid:5658 exited_at:{seconds:1756959662 nanos:624538235}"
Sep 4 04:21:03.285894 sshd[5632]: Connection closed by 10.0.0.1 port 45884
Sep 4 04:21:03.286403 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:03.302997 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:45900.service - OpenSSH per-connection server daemon (10.0.0.1:45900).
Sep 4 04:21:03.303797 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:45884.service: Deactivated successfully.
Sep 4 04:21:03.310526 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 04:21:03.311921 systemd[1]: session-22.scope: Consumed 715ms CPU time, 72.9M memory peak.
Sep 4 04:21:03.313248 systemd-logind[1513]: Session 22 logged out. Waiting for processes to exit.
Sep 4 04:21:03.317822 systemd-logind[1513]: Removed session 22.
Sep 4 04:21:03.372653 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 45900 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:03.374418 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:03.380466 systemd-logind[1513]: New session 23 of user core.
Sep 4 04:21:03.394288 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 04:21:03.808887 sshd[5679]: Connection closed by 10.0.0.1 port 45900
Sep 4 04:21:03.811299 sshd-session[5672]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:03.822539 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:45900.service: Deactivated successfully.
Sep 4 04:21:03.825239 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 04:21:03.826486 systemd-logind[1513]: Session 23 logged out. Waiting for processes to exit.
Sep 4 04:21:03.829755 systemd-logind[1513]: Removed session 23.
Sep 4 04:21:03.832073 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:45908.service - OpenSSH per-connection server daemon (10.0.0.1:45908).
Sep 4 04:21:03.883752 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 45908 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:03.885617 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:03.891356 systemd-logind[1513]: New session 24 of user core.
Sep 4 04:21:03.901307 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 04:21:03.993829 kubelet[2768]: E0904 04:21:03.993771 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:21:04.035793 sshd[5693]: Connection closed by 10.0.0.1 port 45908
Sep 4 04:21:04.036920 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:04.042435 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:45908.service: Deactivated successfully.
Sep 4 04:21:04.046551 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 04:21:04.049679 systemd-logind[1513]: Session 24 logged out. Waiting for processes to exit.
Sep 4 04:21:04.052486 systemd-logind[1513]: Removed session 24.
Sep 4 04:21:07.993912 kubelet[2768]: E0904 04:21:07.993838 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:21:08.993978 kubelet[2768]: E0904 04:21:08.993894 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:21:09.055149 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:45918.service - OpenSSH per-connection server daemon (10.0.0.1:45918).
Sep 4 04:21:09.122063 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:09.125889 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:09.133260 systemd-logind[1513]: New session 25 of user core.
Sep 4 04:21:09.142976 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 04:21:09.289014 sshd[5709]: Connection closed by 10.0.0.1 port 45918
Sep 4 04:21:09.289440 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:09.295545 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:45918.service: Deactivated successfully.
Sep 4 04:21:09.298607 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 04:21:09.299782 systemd-logind[1513]: Session 25 logged out. Waiting for processes to exit.
Sep 4 04:21:09.301478 systemd-logind[1513]: Removed session 25.
Sep 4 04:21:14.308420 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:39514.service - OpenSSH per-connection server daemon (10.0.0.1:39514).
Sep 4 04:21:14.369881 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 39514 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:14.371732 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:14.377600 systemd-logind[1513]: New session 26 of user core.
Sep 4 04:21:14.393458 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 04:21:14.527721 sshd[5730]: Connection closed by 10.0.0.1 port 39514
Sep 4 04:21:14.528168 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:14.534501 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:39514.service: Deactivated successfully.
Sep 4 04:21:14.537828 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 04:21:14.539834 systemd-logind[1513]: Session 26 logged out. Waiting for processes to exit.
Sep 4 04:21:14.541939 systemd-logind[1513]: Removed session 26.
Sep 4 04:21:16.475898 containerd[1582]: time="2025-09-04T04:21:16.475828112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0bb580d3074029807ad089d77efa363a6137cb8c3b44f6fc2f8f95a0eb750f9\" id:\"3a9a4a1780ff494e185c8aab4382cb52e2bc7f39ba2013c30ac4ca0a221f4f5e\" pid:5754 exited_at:{seconds:1756959676 nanos:475383931}"
Sep 4 04:21:19.553939 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:39516.service - OpenSSH per-connection server daemon (10.0.0.1:39516).
Sep 4 04:21:19.634862 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 39516 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:19.637305 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:19.644610 systemd-logind[1513]: New session 27 of user core.
Sep 4 04:21:19.649093 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 04:21:19.937871 sshd[5771]: Connection closed by 10.0.0.1 port 39516
Sep 4 04:21:19.938640 sshd-session[5768]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:19.945082 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:39516.service: Deactivated successfully.
Sep 4 04:21:19.949271 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 04:21:19.950669 systemd-logind[1513]: Session 27 logged out. Waiting for processes to exit.
Sep 4 04:21:19.953094 systemd-logind[1513]: Removed session 27.
Sep 4 04:21:20.588962 containerd[1582]: time="2025-09-04T04:21:20.588372591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fca42bed212684b3cd03e8d7849f86b7798233583abac4e2c7e5b23645f43b9\" id:\"48fbccb80f0223369c86c9be151bb5144e57b4814a021bf5d93537d7d196c70d\" pid:5797 exited_at:{seconds:1756959680 nanos:587900604}"
Sep 4 04:21:23.993753 kubelet[2768]: E0904 04:21:23.993706 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:21:24.960097 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:41870.service - OpenSSH per-connection server daemon (10.0.0.1:41870).
Sep 4 04:21:25.023884 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 41870 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:21:25.026173 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:21:25.031598 systemd-logind[1513]: New session 28 of user core.
Sep 4 04:21:25.043477 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 04:21:25.207826 sshd[5814]: Connection closed by 10.0.0.1 port 41870
Sep 4 04:21:25.204931 sshd-session[5810]: pam_unix(sshd:session): session closed for user core
Sep 4 04:21:25.215250 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:41870.service: Deactivated successfully.
Sep 4 04:21:25.235255 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 04:21:25.251804 systemd-logind[1513]: Session 28 logged out. Waiting for processes to exit.
Sep 4 04:21:25.258370 systemd-logind[1513]: Removed session 28.