Aug 13 01:38:21.855975 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:38:21.855995 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:38:21.856004 kernel: BIOS-provided physical RAM map:
Aug 13 01:38:21.856013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:38:21.856019 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:38:21.856024 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:38:21.856031 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:38:21.856037 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:38:21.856042 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:38:21.856048 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:38:21.856054 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:38:21.856060 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:38:21.856067 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:38:21.856073 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:38:21.856080 kernel: NX (Execute Disable) protection: active
Aug 13 01:38:21.856086 kernel: APIC: Static calls initialized
Aug 13 01:38:21.856092 kernel: SMBIOS 2.8 present.
Aug 13 01:38:21.856100 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:38:21.856106 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:38:21.856113 kernel: Hypervisor detected: KVM
Aug 13 01:38:21.856119 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:38:21.856125 kernel: kvm-clock: using sched offset of 5649264018 cycles
Aug 13 01:38:21.856131 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:38:21.856138 kernel: tsc: Detected 1999.999 MHz processor
Aug 13 01:38:21.856144 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:38:21.856151 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:38:21.856157 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:38:21.856165 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:38:21.856172 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:38:21.856178 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:38:21.856184 kernel: Using GB pages for direct mapping
Aug 13 01:38:21.856191 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:38:21.856197 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:38:21.856203 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856209 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856216 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856224 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:38:21.856230 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856237 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856243 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856252 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:38:21.856258 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:38:21.856267 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:38:21.856273 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:38:21.856280 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:38:21.856287 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:38:21.856293 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:38:21.856300 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:38:21.856306 kernel: No NUMA configuration found
Aug 13 01:38:21.856313 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:38:21.856321 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:38:21.856328 kernel: Zone ranges:
Aug 13 01:38:21.856355 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:38:21.856362 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:38:21.856369 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:38:21.856375 kernel: Device empty
Aug 13 01:38:21.856382 kernel: Movable zone start for each node
Aug 13 01:38:21.856388 kernel: Early memory node ranges
Aug 13 01:38:21.856395 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:38:21.856401 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:38:21.856410 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:38:21.856417 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:38:21.856423 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:38:21.856430 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:38:21.856437 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:38:21.856443 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:38:21.856450 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:38:21.856456 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:38:21.856463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:38:21.856471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:38:21.856478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:38:21.856485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:38:21.856491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:38:21.856498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:38:21.856504 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:38:21.856511 kernel: TSC deadline timer available
Aug 13 01:38:21.856517 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:38:21.856524 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:38:21.856532 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:38:21.856539 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:38:21.856545 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:38:21.856552 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:38:21.856558 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:38:21.856565 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:38:21.856571 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:38:21.856578 kernel: kvm-guest: setup PV sched yield
Aug 13 01:38:21.856585 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:38:21.856593 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:38:21.856600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:38:21.856606 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:38:21.856613 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:38:21.856620 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:38:21.856626 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:38:21.856633 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:38:21.856639 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:38:21.856647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:38:21.856656 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:38:21.856662 kernel: random: crng init done
Aug 13 01:38:21.856669 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:38:21.856676 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:38:21.856682 kernel: Fallback order for Node 0: 0
Aug 13 01:38:21.856689 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:38:21.856696 kernel: Policy zone: Normal
Aug 13 01:38:21.856702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:38:21.856710 kernel: software IO TLB: area num 2.
Aug 13 01:38:21.856717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:38:21.856724 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:38:21.856730 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:38:21.856737 kernel: Dynamic Preempt: voluntary
Aug 13 01:38:21.856743 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:38:21.856751 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:38:21.856758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:38:21.856764 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:38:21.856773 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:38:21.856780 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:38:21.856786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:38:21.856793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:38:21.856800 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:38:21.856812 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:38:21.856821 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:38:21.856828 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:38:21.856835 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:38:21.856842 kernel: Console: colour VGA+ 80x25
Aug 13 01:38:21.856848 kernel: printk: legacy console [tty0] enabled
Aug 13 01:38:21.856855 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:38:21.856864 kernel: ACPI: Core revision 20240827
Aug 13 01:38:21.856871 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:38:21.856878 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:38:21.856885 kernel: x2apic enabled
Aug 13 01:38:21.856892 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:38:21.856901 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:38:21.856908 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:38:21.856915 kernel: kvm-guest: setup PV IPIs
Aug 13 01:38:21.856922 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:38:21.856929 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:38:21.856936 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Aug 13 01:38:21.856943 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:38:21.856950 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:38:21.856956 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:38:21.856965 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:38:21.856972 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:38:21.856978 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:38:21.856985 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:38:21.856992 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:38:21.856999 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:38:21.857005 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:38:21.857013 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:38:21.857021 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:38:21.857028 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:38:21.857034 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:38:21.857041 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:38:21.857048 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:38:21.857054 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:38:21.857061 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:38:21.857068 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:38:21.857074 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:38:21.857083 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:38:21.857089 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:38:21.857096 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:38:21.857103 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:38:21.857109 kernel: landlock: Up and running.
Aug 13 01:38:21.857116 kernel: SELinux: Initializing.
Aug 13 01:38:21.857123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:38:21.857130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:38:21.857137 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:38:21.857145 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:38:21.857152 kernel: ... version: 0
Aug 13 01:38:21.857158 kernel: ... bit width: 48
Aug 13 01:38:21.857165 kernel: ... generic registers: 6
Aug 13 01:38:21.857172 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:38:21.857178 kernel: ... max period: 00007fffffffffff
Aug 13 01:38:21.857185 kernel: ... fixed-purpose events: 0
Aug 13 01:38:21.857191 kernel: ... event mask: 000000000000003f
Aug 13 01:38:21.857198 kernel: signal: max sigframe size: 3376
Aug 13 01:38:21.857206 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:38:21.857213 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:38:21.857220 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:38:21.857226 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:38:21.857233 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:38:21.857240 kernel: .... node #0, CPUs: #1
Aug 13 01:38:21.857246 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:38:21.857253 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Aug 13 01:38:21.857260 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:38:21.857268 kernel: devtmpfs: initialized
Aug 13 01:38:21.857274 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:38:21.857281 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:38:21.857288 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:38:21.857295 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:38:21.857301 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:38:21.857308 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:38:21.857314 kernel: audit: type=2000 audit(1755049099.708:1): state=initialized audit_enabled=0 res=1
Aug 13 01:38:21.857321 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:38:21.857330 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:38:21.857353 kernel: cpuidle: using governor menu
Aug 13 01:38:21.857360 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:38:21.857366 kernel: dca service started, version 1.12.1
Aug 13 01:38:21.857373 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:38:21.857380 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:38:21.857387 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:38:21.857393 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:38:21.857400 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:38:21.857409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:38:21.857416 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:38:21.857422 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:38:21.857429 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:38:21.857436 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:38:21.857442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:38:21.857449 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:38:21.857456 kernel: ACPI: Interpreter enabled
Aug 13 01:38:21.857462 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:38:21.857471 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:38:21.857477 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:38:21.857484 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:38:21.857491 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:38:21.857497 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:38:21.857666 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:38:21.857781 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:38:21.857894 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:38:21.857904 kernel: PCI host bridge to bus 0000:00
Aug 13 01:38:21.858020 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:38:21.858121 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:38:21.858238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:38:21.858380 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:38:21.858484 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:38:21.858582 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:38:21.858685 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:38:21.858810 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:38:21.858933 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:38:21.859045 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:38:21.859154 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:38:21.860585 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:38:21.860712 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:38:21.860831 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:38:21.860940 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:38:21.861049 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:38:21.861156 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:38:21.861277 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:38:21.861410 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:38:21.861527 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:38:21.861634 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:38:21.861742 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:38:21.861858 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:38:21.861965 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:38:21.862079 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:38:21.862200 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:38:21.862306 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:38:21.862499 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:38:21.862611 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:38:21.862626 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:38:21.862633 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:38:21.862640 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:38:21.862650 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:38:21.862657 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:38:21.862664 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:38:21.862670 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:38:21.862677 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:38:21.862684 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:38:21.862690 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:38:21.862697 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:38:21.862703 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:38:21.862712 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:38:21.862718 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:38:21.862725 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:38:21.862732 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:38:21.862738 kernel: iommu: Default domain type: Translated
Aug 13 01:38:21.862745 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:38:21.862752 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:38:21.862758 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:38:21.862765 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:38:21.862773 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:38:21.862879 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:38:21.862985 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:38:21.863091 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:38:21.863100 kernel: vgaarb: loaded
Aug 13 01:38:21.863107 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:38:21.863114 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:38:21.863121 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:38:21.863128 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:38:21.863137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:38:21.863144 kernel: pnp: PnP ACPI init
Aug 13 01:38:21.863264 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:38:21.863275 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:38:21.863282 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:38:21.863289 kernel: NET: Registered PF_INET protocol family
Aug 13 01:38:21.863295 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:38:21.863302 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:38:21.863311 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:38:21.863318 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:38:21.863325 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:38:21.863332 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:38:21.863394 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:38:21.863401 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:38:21.863408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:38:21.863415 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:38:21.865455 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:38:21.865637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:38:21.865739 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:38:21.865838 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:38:21.865936 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:38:21.866033 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:38:21.866042 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:38:21.866049 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:38:21.866057 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:38:21.866067 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:38:21.866074 kernel: Initialise system trusted keyrings
Aug 13 01:38:21.866081 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:38:21.866088 kernel: Key type asymmetric registered
Aug 13 01:38:21.866095 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:38:21.866102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:38:21.866109 kernel: io scheduler mq-deadline registered
Aug 13 01:38:21.866116 kernel: io scheduler kyber registered
Aug 13 01:38:21.866122 kernel: io scheduler bfq registered
Aug 13 01:38:21.866131 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:38:21.866138 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:38:21.866145 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:38:21.866152 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:38:21.866159 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:38:21.866166 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:38:21.866173 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:38:21.866180 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:38:21.867384 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:38:21.867404 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:38:21.867524 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:38:21.867630 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:38:21 UTC (1755049101)
Aug 13 01:38:21.867759 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:38:21.867771 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:38:21.867778 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:38:21.867785 kernel: Segment Routing with IPv6
Aug 13 01:38:21.867791 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:38:21.867801 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:38:21.867808 kernel: Key type dns_resolver registered
Aug 13 01:38:21.867815 kernel: IPI shorthand broadcast: enabled
Aug 13 01:38:21.867821 kernel: sched_clock: Marking stable (2739003942, 213391941)->(2990595054, -38199171)
Aug 13 01:38:21.867828 kernel: registered taskstats version 1
Aug 13 01:38:21.867835 kernel: Loading compiled-in X.509 certificates
Aug 13 01:38:21.867842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:38:21.867848 kernel: Demotion targets for Node 0: null
Aug 13 01:38:21.867855 kernel: Key type .fscrypt registered
Aug 13 01:38:21.867863 kernel: Key type fscrypt-provisioning registered
Aug 13 01:38:21.867870 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:38:21.867877 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:38:21.867884 kernel: ima: No architecture policies found
Aug 13 01:38:21.867890 kernel: clk: Disabling unused clocks
Aug 13 01:38:21.867897 kernel: Warning: unable to open an initial console.
Aug 13 01:38:21.867904 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:38:21.867911 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:38:21.867917 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:38:21.867926 kernel: Run /init as init process
Aug 13 01:38:21.867932 kernel: with arguments:
Aug 13 01:38:21.867939 kernel: /init
Aug 13 01:38:21.867946 kernel: with environment:
Aug 13 01:38:21.867952 kernel: HOME=/
Aug 13 01:38:21.867970 kernel: TERM=linux
Aug 13 01:38:21.867979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:38:21.867988 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:38:21.868000 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:38:21.868008 systemd[1]: Detected virtualization kvm.
Aug 13 01:38:21.868015 systemd[1]: Detected architecture x86-64.
Aug 13 01:38:21.868022 systemd[1]: Running in initrd.
Aug 13 01:38:21.868031 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:38:21.868045 systemd[1]: Hostname set to .
Aug 13 01:38:21.868057 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:38:21.868067 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:38:21.868076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:38:21.868084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:38:21.868092 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:38:21.868099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:38:21.868107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:38:21.868115 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:38:21.868125 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:38:21.868132 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:38:21.868140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:38:21.868147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:38:21.868154 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:38:21.868161 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:38:21.868169 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:38:21.868176 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:38:21.868183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:38:21.868192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:38:21.868200 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:38:21.868207 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:38:21.868214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:38:21.868221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:38:21.868229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:38:21.868236 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:38:21.868245 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:38:21.868252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:38:21.868260 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:38:21.868267 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:38:21.868275 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:38:21.868282 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:38:21.868289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:38:21.868298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:38:21.868305 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:38:21.868313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:38:21.868321 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:38:21.868330 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:38:21.869408 systemd-journald[206]: Collecting audit messages is disabled.
Aug 13 01:38:21.869429 systemd-journald[206]: Journal started
Aug 13 01:38:21.869450 systemd-journald[206]: Runtime Journal (/run/log/journal/827344a9f2604f5bb16f8408a4316bdb) is 8M, max 78.5M, 70.5M free.
Aug 13 01:38:21.866512 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 01:38:21.941183 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:38:21.941210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:38:21.941229 kernel: Bridge firewalling registered
Aug 13 01:38:21.891800 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 01:38:21.941846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:38:21.942673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:38:21.943895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:38:21.947290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:38:21.950439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:38:21.954440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:38:21.961455 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:38:21.969235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:38:21.972599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:38:21.979093 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:38:21.981312 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:38:21.981481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:38:21.985966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:38:21.990443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:38:22.003132 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:38:22.033411 systemd-resolved[245]: Positive Trust Anchors:
Aug 13 01:38:22.034063 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:38:22.034090 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:38:22.039265 systemd-resolved[245]: Defaulting to hostname 'linux'.
Aug 13 01:38:22.040393 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:38:22.041371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:38:22.089367 kernel: SCSI subsystem initialized
Aug 13 01:38:22.097365 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:38:22.109374 kernel: iscsi: registered transport (tcp)
Aug 13 01:38:22.130932 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:38:22.130987 kernel: QLogic iSCSI HBA Driver
Aug 13 01:38:22.151592 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:38:22.173501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:38:22.175984 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:38:22.226202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:38:22.228034 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:38:22.276361 kernel: raid6: avx2x4 gen() 28026 MB/s
Aug 13 01:38:22.294360 kernel: raid6: avx2x2 gen() 28337 MB/s
Aug 13 01:38:22.312872 kernel: raid6: avx2x1 gen() 20118 MB/s
Aug 13 01:38:22.312947 kernel: raid6: using algorithm avx2x2 gen() 28337 MB/s
Aug 13 01:38:22.331711 kernel: raid6: .... xor() 29794 MB/s, rmw enabled
Aug 13 01:38:22.331748 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:38:22.350363 kernel: xor: automatically using best checksumming function avx
Aug 13 01:38:22.482373 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:38:22.490162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:38:22.492784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:38:22.514557 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 01:38:22.519513 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:38:22.522439 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:38:22.540890 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Aug 13 01:38:22.570770 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:38:22.572452 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:38:22.642198 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:38:22.646096 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:38:22.710493 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:38:22.712352 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:38:22.721374 kernel: libata version 3.00 loaded.
Aug 13 01:38:22.725384 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:38:22.735275 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:38:22.740381 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:38:22.743956 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:38:22.764583 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:38:22.764761 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:38:22.772298 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:38:22.772489 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:38:22.772627 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:38:22.785364 kernel: scsi host1: ahci
Aug 13 01:38:22.790849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:38:22.792845 kernel: scsi host2: ahci
Aug 13 01:38:22.793012 kernel: scsi host3: ahci
Aug 13 01:38:22.791045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:38:22.796491 kernel: scsi host4: ahci
Aug 13 01:38:22.796531 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:38:22.796702 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:38:22.795778 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:38:22.801392 kernel: scsi host5: ahci
Aug 13 01:38:22.801551 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:38:22.803353 kernel: scsi host6: ahci
Aug 13 01:38:22.803505 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:38:22.892501 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 0
Aug 13 01:38:22.894795 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:38:22.895035 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 0
Aug 13 01:38:22.899970 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 0
Aug 13 01:38:22.904169 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 0
Aug 13 01:38:22.904389 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 0
Aug 13 01:38:22.916595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:38:22.919683 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 0
Aug 13 01:38:22.923418 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:38:22.956097 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:38:22.956114 kernel: GPT:9289727 != 9297919
Aug 13 01:38:22.956125 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:38:22.956135 kernel: GPT:9289727 != 9297919
Aug 13 01:38:22.956145 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:38:22.956155 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:38:22.956170 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:38:23.016164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:38:23.226925 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.226999 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.227442 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.235134 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.235208 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.235229 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:38:23.282515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:38:23.298444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:38:23.310471 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:38:23.311083 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:38:23.312716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:38:23.322314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:38:23.323781 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:38:23.324433 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:38:23.325758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:38:23.327670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:38:23.330612 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:38:23.345112 disk-uuid[634]: Primary Header is updated.
Aug 13 01:38:23.345112 disk-uuid[634]: Secondary Entries is updated.
Aug 13 01:38:23.345112 disk-uuid[634]: Secondary Header is updated.
Aug 13 01:38:23.351628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:38:23.355365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:38:23.367367 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:38:24.372362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:38:24.373311 disk-uuid[637]: The operation has completed successfully.
Aug 13 01:38:24.422993 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:38:24.423109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:38:24.446039 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:38:24.461182 sh[656]: Success
Aug 13 01:38:24.478987 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:38:24.479013 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:38:24.479627 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:38:24.489369 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:38:24.531080 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:38:24.534402 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:38:24.544873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:38:24.556814 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:38:24.556837 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (668)
Aug 13 01:38:24.563586 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:38:24.563609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:38:24.563619 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:38:24.571945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:38:24.572781 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:38:24.573647 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:38:24.574259 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:38:24.576834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:38:24.600379 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (701)
Aug 13 01:38:24.605995 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:38:24.606021 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:38:24.606032 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:38:24.614519 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:38:24.614764 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:38:24.616694 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:38:24.680166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:38:24.684448 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:38:24.721589 systemd-networkd[838]: lo: Link UP
Aug 13 01:38:24.722324 systemd-networkd[838]: lo: Gained carrier
Aug 13 01:38:24.724285 systemd-networkd[838]: Enumeration completed
Aug 13 01:38:24.724947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:38:24.725235 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:38:24.725239 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:38:24.729302 ignition[766]: Ignition 2.21.0
Aug 13 01:38:24.727542 systemd-networkd[838]: eth0: Link UP
Aug 13 01:38:24.729308 ignition[766]: Stage: fetch-offline
Aug 13 01:38:24.727737 systemd-networkd[838]: eth0: Gained carrier
Aug 13 01:38:24.729353 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:24.727746 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:38:24.729363 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:24.728262 systemd[1]: Reached target network.target - Network.
Aug 13 01:38:24.729436 ignition[766]: parsed url from cmdline: ""
Aug 13 01:38:24.731481 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:38:24.729440 ignition[766]: no config URL provided
Aug 13 01:38:24.733833 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:38:24.729445 ignition[766]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:38:24.729452 ignition[766]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:38:24.729459 ignition[766]: failed to fetch config: resource requires networking
Aug 13 01:38:24.729601 ignition[766]: Ignition finished successfully
Aug 13 01:38:24.751099 ignition[847]: Ignition 2.21.0
Aug 13 01:38:24.751113 ignition[847]: Stage: fetch
Aug 13 01:38:24.751211 ignition[847]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:24.751220 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:24.751276 ignition[847]: parsed url from cmdline: ""
Aug 13 01:38:24.751280 ignition[847]: no config URL provided
Aug 13 01:38:24.751284 ignition[847]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:38:24.751292 ignition[847]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:38:24.751319 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:38:24.751503 ignition[847]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:38:24.952107 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:38:24.952625 ignition[847]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:38:25.268419 systemd-networkd[838]: eth0: DHCPv4 address 172.236.100.188/24, gateway 172.236.100.1 acquired from 23.33.176.48
Aug 13 01:38:25.352824 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:38:25.445731 ignition[847]: PUT result: OK
Aug 13 01:38:25.445806 ignition[847]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:38:25.568962 ignition[847]: GET result: OK
Aug 13 01:38:25.569019 ignition[847]: parsing config with SHA512: 65ddc4e4b1ec389307b844780ba934fe725901dca348114a23ba9c9c80f82d4cfdc221c1832e298f7edb989100eb8d95d34158bdc9371688ef0c8e9f4074ce08
Aug 13 01:38:25.571281 unknown[847]: fetched base config from "system"
Aug 13 01:38:25.571489 unknown[847]: fetched base config from "system"
Aug 13 01:38:25.571654 ignition[847]: fetch: fetch complete
Aug 13 01:38:25.571496 unknown[847]: fetched user config from "akamai"
Aug 13 01:38:25.571658 ignition[847]: fetch: fetch passed
Aug 13 01:38:25.571698 ignition[847]: Ignition finished successfully
Aug 13 01:38:25.575381 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:38:25.598252 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:38:25.625626 ignition[855]: Ignition 2.21.0
Aug 13 01:38:25.625641 ignition[855]: Stage: kargs
Aug 13 01:38:25.625751 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:25.625762 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:25.628525 ignition[855]: kargs: kargs passed
Aug 13 01:38:25.628894 ignition[855]: Ignition finished successfully
Aug 13 01:38:25.631098 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:38:25.632899 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:38:25.655905 ignition[861]: Ignition 2.21.0
Aug 13 01:38:25.655918 ignition[861]: Stage: disks
Aug 13 01:38:25.656051 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:25.656061 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:25.657070 ignition[861]: disks: disks passed
Aug 13 01:38:25.657112 ignition[861]: Ignition finished successfully
Aug 13 01:38:25.658988 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:38:25.660284 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:38:25.661144 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:38:25.662387 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:38:25.663636 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:38:25.664654 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:38:25.666657 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:38:25.692993 systemd-fsck[871]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:38:25.696635 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:38:25.699334 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:38:25.804360 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:38:25.804818 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:38:25.805787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:38:25.807687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:38:25.810404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:38:25.811704 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:38:25.812625 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:38:25.813289 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:38:25.818794 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:38:25.821304 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:38:25.829357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (879)
Aug 13 01:38:25.833662 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:38:25.833692 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:38:25.833703 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:38:25.839877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:38:25.871970 initrd-setup-root[903]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:38:25.876459 initrd-setup-root[910]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:38:25.880608 initrd-setup-root[917]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:38:25.884090 initrd-setup-root[924]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:38:25.970394 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:38:25.973118 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:38:25.974839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:38:25.987490 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:38:25.989828 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:38:26.004380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:38:26.015228 ignition[993]: INFO : Ignition 2.21.0
Aug 13 01:38:26.015228 ignition[993]: INFO : Stage: mount
Aug 13 01:38:26.015228 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:26.015228 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:26.015228 ignition[993]: INFO : mount: mount passed
Aug 13 01:38:26.015228 ignition[993]: INFO : Ignition finished successfully
Aug 13 01:38:26.017851 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:38:26.020138 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:38:26.345565 systemd-networkd[838]: eth0: Gained IPv6LL
Aug 13 01:38:26.806755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:38:26.833371 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1004)
Aug 13 01:38:26.836650 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:38:26.836674 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:38:26.839358 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:38:26.843199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:38:26.868145 ignition[1020]: INFO : Ignition 2.21.0
Aug 13 01:38:26.868145 ignition[1020]: INFO : Stage: files
Aug 13 01:38:26.869324 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:38:26.869324 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:38:26.869324 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:38:26.871468 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:38:26.871468 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:38:26.873444 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:38:26.874296 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:38:26.874296 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:38:26.873847 unknown[1020]: wrote ssh authorized keys file for user: core
Aug 13 01:38:26.876539 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:38:26.876539 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:38:26.878249 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:38:26.878249 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:38:26.878249 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:38:26.878249 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:38:26.878249 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:38:26.883010 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 01:38:27.166256 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Aug 13 01:38:27.460709 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:38:27.460709 ignition[1020]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Aug 13 01:38:27.462860 ignition[1020]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:38:27.463944 ignition[1020]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:38:27.463944 ignition[1020]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Aug 13 01:38:27.463944 ignition[1020]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:38:27.463944 ignition[1020]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:38:27.463944 ignition[1020]: INFO : files: files passed
Aug 13 01:38:27.463944 ignition[1020]: INFO : Ignition finished successfully
Aug 13 01:38:27.465536 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:38:27.469448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:38:27.489880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:38:27.498208 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:38:27.498326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:38:27.505829 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:38:27.505829 initrd-setup-root-after-ignition[1051]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:38:27.508322 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:38:27.509423 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:38:27.511365 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:38:27.513654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:38:27.555817 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:38:27.555938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 01:38:27.557490 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 01:38:27.558355 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:38:27.559605 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:38:27.560274 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:38:27.577472 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:38:27.580487 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:38:27.597354 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:38:27.598676 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:38:27.599296 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:38:27.599915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:38:27.600024 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:38:27.601505 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:38:27.602267 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:38:27.603280 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:38:27.604549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:38:27.605647 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:38:27.606731 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:38:27.607992 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:38:27.609197 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:38:27.610486 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:38:27.611699 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Aug 13 01:38:27.612845 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:38:27.614066 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:38:27.614199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:38:27.615656 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:38:27.616477 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:38:27.617459 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:38:27.617553 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:38:27.618553 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:38:27.618685 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:38:27.620182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:38:27.620330 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:38:27.621024 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:38:27.621113 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:38:27.624413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:38:27.625179 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:38:27.625362 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:38:27.628302 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:38:27.628850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:38:27.628959 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:38:27.630050 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:38:27.630424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 01:38:27.637812 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:38:27.637906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:38:27.653884 ignition[1075]: INFO : Ignition 2.21.0 Aug 13 01:38:27.653884 ignition[1075]: INFO : Stage: umount Aug 13 01:38:27.653884 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:38:27.653884 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:38:27.657159 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:38:27.659620 ignition[1075]: INFO : umount: umount passed Aug 13 01:38:27.661380 ignition[1075]: INFO : Ignition finished successfully Aug 13 01:38:27.661980 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:38:27.662105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:38:27.663249 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:38:27.663296 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:38:27.664708 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:38:27.664756 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:38:27.665791 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:38:27.665834 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:38:27.666833 systemd[1]: Stopped target network.target - Network. Aug 13 01:38:27.667869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:38:27.667917 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:38:27.668979 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:38:27.669948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:38:27.669999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 13 01:38:27.671042 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:38:27.672029 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:38:27.673210 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:38:27.673250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:38:27.674422 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:38:27.674459 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:38:27.675455 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:38:27.675506 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:38:27.676652 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:38:27.676695 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:38:27.699415 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:38:27.700777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:38:27.701709 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:38:27.701814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:38:27.702822 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:38:27.702894 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:38:27.706551 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:38:27.707501 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:38:27.710991 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:38:27.711224 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:38:27.711381 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:38:27.713322 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Aug 13 01:38:27.713868 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:38:27.714845 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:38:27.714887 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:38:27.718444 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:38:27.719236 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:38:27.719286 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:38:27.720511 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:38:27.720556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:38:27.722486 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:38:27.722532 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:38:27.723283 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:38:27.723328 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:38:27.724801 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:38:27.743809 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:38:27.744587 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:38:27.746134 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:38:27.746248 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:38:27.748122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:38:27.748192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:38:27.749037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 13 01:38:27.749077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:38:27.750255 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:38:27.750303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:38:27.751969 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:38:27.752017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:38:27.753188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:38:27.753239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:38:27.754957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:38:27.757166 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:38:27.757219 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:38:27.760438 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:38:27.760486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:38:27.761870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:38:27.761917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:38:27.770513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:38:27.770618 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:38:27.772198 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:38:27.774495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:38:27.790544 systemd[1]: Switching root. Aug 13 01:38:27.826703 systemd-journald[206]: Journal stopped Aug 13 01:38:28.869365 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). 
Aug 13 01:38:28.869394 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:38:28.869406 kernel: SELinux: policy capability open_perms=1 Aug 13 01:38:28.869418 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:38:28.869426 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:38:28.869435 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:38:28.869444 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:38:28.869453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:38:28.869462 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:38:28.869471 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:38:28.869489 kernel: audit: type=1403 audit(1755049107.958:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:38:28.869499 systemd[1]: Successfully loaded SELinux policy in 53.276ms. Aug 13 01:38:28.869509 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.692ms. Aug 13 01:38:28.869520 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:38:28.869531 systemd[1]: Detected virtualization kvm. Aug 13 01:38:28.869542 systemd[1]: Detected architecture x86-64. Aug 13 01:38:28.869551 systemd[1]: Detected first boot. Aug 13 01:38:28.869561 systemd[1]: Initializing machine ID from random generator. Aug 13 01:38:28.869571 zram_generator::config[1119]: No configuration found. 
Aug 13 01:38:28.869581 kernel: Guest personality initialized and is inactive Aug 13 01:38:28.869590 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:38:28.869599 kernel: Initialized host personality Aug 13 01:38:28.869610 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:38:28.869619 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:38:28.869630 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:38:28.869639 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:38:28.869649 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:38:28.869658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:38:28.869755 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:38:28.869771 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:38:28.869784 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:38:28.869794 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:38:28.869804 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:38:28.869814 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:38:28.869824 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:38:28.869833 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:38:28.869845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:38:28.869855 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:38:28.869865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Aug 13 01:38:28.869874 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:38:28.869887 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:38:28.869897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:38:28.869907 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:38:28.869917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:38:28.869929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:38:28.869939 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:38:28.869980 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:38:28.869990 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:38:28.870000 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:38:28.870010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:38:28.870022 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:38:28.870032 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:38:28.870044 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:38:28.870055 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:38:28.870064 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:38:28.870074 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:38:28.870084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:38:28.870096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:38:28.870106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 01:38:28.870116 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:38:28.870126 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:38:28.870137 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:38:28.870146 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:38:28.870157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:38:28.870167 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:38:28.870178 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:38:28.870188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:38:28.870198 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:38:28.870208 systemd[1]: Reached target machines.target - Containers. Aug 13 01:38:28.870218 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:38:28.870229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:38:28.870240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:38:28.870251 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:38:28.870262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:38:28.870273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:38:28.870283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:38:28.870293 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Aug 13 01:38:28.870302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:38:28.870313 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:38:28.870323 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:38:28.870376 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:38:28.870389 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:38:28.870402 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:38:28.870413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:38:28.870422 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:38:28.870432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:38:28.870442 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:38:28.870452 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:38:28.870462 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:38:28.870472 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:38:28.870484 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:38:28.870494 systemd[1]: Stopped verity-setup.service. Aug 13 01:38:28.870504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 01:38:28.870514 kernel: ACPI: bus type drm_connector registered Aug 13 01:38:28.870524 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:38:28.870534 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:38:28.870543 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:38:28.870553 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:38:28.870565 kernel: fuse: init (API version 7.41) Aug 13 01:38:28.870574 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:38:28.870584 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:38:28.870594 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:38:28.870626 systemd-journald[1210]: Collecting audit messages is disabled. Aug 13 01:38:28.870650 systemd-journald[1210]: Journal started Aug 13 01:38:28.870670 systemd-journald[1210]: Runtime Journal (/run/log/journal/8a8334d6d91b46cd82d3bf3c63c76fb7) is 8M, max 78.5M, 70.5M free. Aug 13 01:38:28.505493 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:38:28.515070 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:38:28.515631 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:38:28.874419 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:38:28.874443 kernel: loop: module loaded Aug 13 01:38:28.877315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:38:28.878768 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:38:28.878995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:38:28.880722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:38:28.880978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Aug 13 01:38:28.882774 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:38:28.883004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:38:28.883938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:38:28.884159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:38:28.886902 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:38:28.887122 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:38:28.888085 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:38:28.888296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:38:28.889243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:38:28.890458 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:38:28.891536 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:38:28.892556 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:38:28.909292 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:38:28.912431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:38:28.916069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:38:28.917747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:38:28.917832 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:38:28.919229 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:38:28.923961 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 01:38:28.925669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:38:28.928072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:38:28.930468 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:38:28.931075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:38:28.935568 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:38:28.936634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:38:28.939463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:38:28.950954 systemd-journald[1210]: Time spent on flushing to /var/log/journal/8a8334d6d91b46cd82d3bf3c63c76fb7 is 60.853ms for 973 entries. Aug 13 01:38:28.950954 systemd-journald[1210]: System Journal (/var/log/journal/8a8334d6d91b46cd82d3bf3c63c76fb7) is 8M, max 195.6M, 187.6M free. Aug 13 01:38:29.024775 systemd-journald[1210]: Received client request to flush runtime journal. Aug 13 01:38:29.025212 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 01:38:29.025247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:38:28.943282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:38:28.946502 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:38:28.948790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:38:28.949730 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:38:28.954578 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Aug 13 01:38:28.982784 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:38:28.986572 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:38:28.991020 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:38:29.033220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:38:29.043870 kernel: loop1: detected capacity change from 0 to 8 Aug 13 01:38:29.047950 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:38:29.050052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:38:29.068864 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 01:38:29.075570 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:38:29.079521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:38:29.109384 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 01:38:29.117102 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:38:29.117120 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:38:29.123806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:38:29.165388 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 01:38:29.188369 kernel: loop5: detected capacity change from 0 to 8 Aug 13 01:38:29.193375 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 01:38:29.216386 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 01:38:29.231135 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:38:29.231739 (sd-merge)[1267]: Merged extensions into '/usr'. 
Aug 13 01:38:29.236407 systemd[1]: Reload requested from client PID 1244 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:38:29.236587 systemd[1]: Reloading... Aug 13 01:38:29.349373 zram_generator::config[1296]: No configuration found. Aug 13 01:38:29.463232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:38:29.517513 ldconfig[1239]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:38:29.543318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:38:29.544068 systemd[1]: Reloading finished in 306 ms. Aug 13 01:38:29.559844 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:38:29.561182 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:38:29.573616 systemd[1]: Starting ensure-sysext.service... Aug 13 01:38:29.575577 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:38:29.605299 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:38:29.605314 systemd[1]: Reloading... Aug 13 01:38:29.628218 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:38:29.629689 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:38:29.630143 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:38:29.632119 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Aug 13 01:38:29.633028 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:38:29.633329 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Aug 13 01:38:29.633471 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Aug 13 01:38:29.637241 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:38:29.637664 systemd-tmpfiles[1337]: Skipping /boot Aug 13 01:38:29.663099 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:38:29.663118 systemd-tmpfiles[1337]: Skipping /boot Aug 13 01:38:29.718378 zram_generator::config[1370]: No configuration found. Aug 13 01:38:29.793532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:38:29.862120 systemd[1]: Reloading finished in 256 ms. Aug 13 01:38:29.885237 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:38:29.902406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:38:29.910399 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:38:29.913522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:38:29.921258 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:38:29.926526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:38:29.929293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:38:29.932754 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Aug 13 01:38:29.935858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:29.936012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:38:29.938284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:38:29.942627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:38:29.952777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:38:29.954493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:38:29.954593 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:38:29.954681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:29.960611 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:38:29.965380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:38:29.966505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:38:29.967393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:38:29.977867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:29.978098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:38:29.978304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:38:29.978479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:38:29.981269 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:38:29.982011 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:29.987543 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:38:29.988636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:38:29.990264 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:29.992281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:38:29.995525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:38:30.000651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:38:30.002488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:38:30.002588 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:38:30.002704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:38:30.009527 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:38:30.010946 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
Aug 13 01:38:30.018839 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:38:30.029547 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:38:30.030744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:38:30.032403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:38:30.033771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:38:30.044817 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:38:30.045988 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:38:30.047881 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:38:30.053919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:38:30.054167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:38:30.054990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:38:30.062807 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:38:30.064021 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:38:30.069692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:38:30.075514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:38:30.107151 augenrules[1475]: No rules
Aug 13 01:38:30.113946 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:38:30.114365 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:38:30.116686 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:38:30.188060 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:38:30.263375 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:38:30.339376 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 01:38:30.347805 systemd-networkd[1455]: lo: Link UP
Aug 13 01:38:30.347817 systemd-networkd[1455]: lo: Gained carrier
Aug 13 01:38:30.353399 systemd-networkd[1455]: Enumeration completed
Aug 13 01:38:30.353479 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:38:30.354264 systemd-networkd[1455]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:38:30.354279 systemd-networkd[1455]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:38:30.356196 systemd-networkd[1455]: eth0: Link UP
Aug 13 01:38:30.356386 systemd-networkd[1455]: eth0: Gained carrier
Aug 13 01:38:30.356410 systemd-networkd[1455]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:38:30.357575 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:38:30.363446 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:38:30.376564 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:38:30.396181 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:38:30.457618 systemd-resolved[1412]: Positive Trust Anchors:
Aug 13 01:38:30.457938 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:38:30.458012 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:38:30.459990 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:38:30.460760 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:38:30.462606 systemd-resolved[1412]: Defaulting to hostname 'linux'.
Aug 13 01:38:30.467402 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:38:30.467699 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:38:30.470415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:38:30.475942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:38:30.476977 systemd[1]: Reached target network.target - Network.
Aug 13 01:38:30.477503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:38:30.478431 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:38:30.479518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:38:30.480122 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:38:30.481907 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 01:38:30.482745 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:38:30.483481 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:38:30.484398 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:38:30.486413 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:38:30.486453 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:38:30.486951 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:38:30.488786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:38:30.492034 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:38:30.497003 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:38:30.499624 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:38:30.500218 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:38:30.510393 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:38:30.511795 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:38:30.516649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 01:38:30.518513 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:38:30.523710 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:38:30.525212 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:38:30.525950 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:38:30.526051 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:38:30.527475 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:38:30.531166 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:38:30.535679 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:38:30.538701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:38:30.542549 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:38:30.553574 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:38:30.555063 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:38:30.557956 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 01:38:30.574209 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:38:30.585586 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:38:30.592385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:38:30.606960 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:38:30.610860 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:38:30.612620 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:38:30.613619 jq[1531]: false
Aug 13 01:38:30.620711 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:38:30.622562 extend-filesystems[1533]: Found /dev/sda6
Aug 13 01:38:30.626824 extend-filesystems[1533]: Found /dev/sda9
Aug 13 01:38:30.629705 extend-filesystems[1533]: Checking size of /dev/sda9
Aug 13 01:38:30.630657 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:38:30.638161 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 01:38:30.642674 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:38:30.644994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:38:30.645295 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:38:30.663980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:38:30.669717 extend-filesystems[1533]: Resized partition /dev/sda9
Aug 13 01:38:30.670190 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:38:30.672467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 01:38:30.675659 extend-filesystems[1560]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 01:38:30.693890 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:38:30.693917 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:38:30.693930 extend-filesystems[1560]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:38:30.693930 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:38:30.693930 extend-filesystems[1560]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting
Aug 13 01:38:30.711738 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:38:30.692088 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:38:30.677644 oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Aug 13 01:38:30.716709 extend-filesystems[1533]: Resized filesystem in /dev/sda9
Aug 13 01:38:30.717350 jq[1546]: true
Aug 13 01:38:30.692609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:38:30.698323 oslogin_cache_refresh[1534]: Failure getting users, quitting
Aug 13 01:38:30.701609 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:38:30.701651 oslogin_cache_refresh[1534]: Refreshing group entry cache
Aug 13 01:38:30.702133 oslogin_cache_refresh[1534]: Failure getting groups, quitting
Aug 13 01:38:30.702143 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:38:30.718493 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 01:38:30.722663 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 01:38:30.730705 jq[1569]: true
Aug 13 01:38:30.740442 update_engine[1541]: I20250813 01:38:30.739651 1541 main.cc:92] Flatcar Update Engine starting
Aug 13 01:38:30.741283 dbus-daemon[1529]: [system] SELinux support is enabled
Aug 13 01:38:30.741486 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:38:30.747557 update_engine[1541]: I20250813 01:38:30.746026 1541 update_check_scheduler.cc:74] Next update check in 11m49s
Aug 13 01:38:30.746666 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:38:30.746794 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:38:30.748477 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:38:30.748578 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:38:30.755046 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:38:30.766982 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:38:30.772689 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:38:30.786985 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:38:30.787305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:38:30.807374 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:38:30.850215 coreos-metadata[1528]: Aug 13 01:38:30.849 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:38:30.878374 bash[1597]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:38:30.887107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:38:30.896658 systemd[1]: Starting sshkeys.service...
Aug 13 01:38:30.900504 systemd-networkd[1455]: eth0: DHCPv4 address 172.236.100.188/24, gateway 172.236.100.1 acquired from 23.33.176.48
Aug 13 01:38:30.902892 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
Aug 13 01:38:30.906386 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1455 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:38:30.914005 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:38:31.976854 systemd-resolved[1412]: Clock change detected. Flushing caches.
Aug 13 01:38:31.977176 systemd-timesyncd[1437]: Contacted time server 108.61.73.244:123 (0.flatcar.pool.ntp.org).
Aug 13 01:38:31.977290 systemd-timesyncd[1437]: Initial clock synchronization to Wed 2025-08-13 01:38:31.976811 UTC.
Aug 13 01:38:31.990953 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:38:31.995112 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:38:32.012058 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 01:38:32.058820 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 01:38:32.059129 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 01:38:32.059761 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:38:32.168977 systemd-logind[1540]: New seat seat0.
Aug 13 01:38:32.210304 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 01:38:32.220218 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:38:32.220753 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1602 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:38:32.234801 coreos-metadata[1608]: Aug 13 01:38:32.234 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:38:32.236180 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:38:32.242546 containerd[1576]: time="2025-08-13T01:38:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 01:38:32.245615 containerd[1576]: time="2025-08-13T01:38:32.243149989Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 01:38:32.244461 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 01:38:32.254350 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 01:38:32.255416 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 01:38:32.260629 containerd[1576]: time="2025-08-13T01:38:32.260570017Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.74µs"
Aug 13 01:38:32.260629 containerd[1576]: time="2025-08-13T01:38:32.260604567Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 01:38:32.260629 containerd[1576]: time="2025-08-13T01:38:32.260623697Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 01:38:32.260754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:38:32.261365 containerd[1576]: time="2025-08-13T01:38:32.260792097Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 01:38:32.261365 containerd[1576]: time="2025-08-13T01:38:32.260808947Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 01:38:32.261365 containerd[1576]: time="2025-08-13T01:38:32.260832917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:38:32.261365 containerd[1576]: time="2025-08-13T01:38:32.260895577Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:38:32.261365 containerd[1576]: time="2025-08-13T01:38:32.260906307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263462 containerd[1576]: time="2025-08-13T01:38:32.263125479Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263462 containerd[1576]: time="2025-08-13T01:38:32.263153179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263462 containerd[1576]: time="2025-08-13T01:38:32.263165999Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263462 containerd[1576]: time="2025-08-13T01:38:32.263174259Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263462 containerd[1576]: time="2025-08-13T01:38:32.263270209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263573 containerd[1576]: time="2025-08-13T01:38:32.263496619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263573 containerd[1576]: time="2025-08-13T01:38:32.263535739Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:38:32.263573 containerd[1576]: time="2025-08-13T01:38:32.263545869Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 01:38:32.263626 containerd[1576]: time="2025-08-13T01:38:32.263594819Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 01:38:32.263901 containerd[1576]: time="2025-08-13T01:38:32.263872849Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 01:38:32.264509 containerd[1576]: time="2025-08-13T01:38:32.263946639Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:38:32.266936 containerd[1576]: time="2025-08-13T01:38:32.266900600Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 01:38:32.266974 containerd[1576]: time="2025-08-13T01:38:32.266962960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 01:38:32.266997 containerd[1576]: time="2025-08-13T01:38:32.266978950Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 01:38:32.266997 containerd[1576]: time="2025-08-13T01:38:32.266991200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 01:38:32.267083 containerd[1576]: time="2025-08-13T01:38:32.267058841Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 01:38:32.267083 containerd[1576]: time="2025-08-13T01:38:32.267080251Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 01:38:32.267122 containerd[1576]: time="2025-08-13T01:38:32.267094891Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 01:38:32.267122 containerd[1576]: time="2025-08-13T01:38:32.267112041Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 01:38:32.267122 containerd[1576]: time="2025-08-13T01:38:32.267122041Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 01:38:32.267186 containerd[1576]: time="2025-08-13T01:38:32.267132141Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 01:38:32.267186 containerd[1576]: time="2025-08-13T01:38:32.267142991Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 01:38:32.267186 containerd[1576]: time="2025-08-13T01:38:32.267158801Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 01:38:32.267287 containerd[1576]: time="2025-08-13T01:38:32.267261431Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 01:38:32.267309 containerd[1576]: time="2025-08-13T01:38:32.267289641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 01:38:32.267309 containerd[1576]: time="2025-08-13T01:38:32.267305341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 01:38:32.267342 containerd[1576]: time="2025-08-13T01:38:32.267316471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 01:38:32.267342 containerd[1576]: time="2025-08-13T01:38:32.267327081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 01:38:32.267342 containerd[1576]: time="2025-08-13T01:38:32.267336871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 01:38:32.267398 containerd[1576]: time="2025-08-13T01:38:32.267347221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 01:38:32.267398 containerd[1576]: time="2025-08-13T01:38:32.267358601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 01:38:32.267398 containerd[1576]: time="2025-08-13T01:38:32.267374121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 01:38:32.267398 containerd[1576]: time="2025-08-13T01:38:32.267384151Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 01:38:32.267398 containerd[1576]: time="2025-08-13T01:38:32.267393381Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 01:38:32.267482 containerd[1576]: time="2025-08-13T01:38:32.267455321Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 01:38:32.267482 containerd[1576]: time="2025-08-13T01:38:32.267468671Z" level=info msg="Start snapshots syncer"
Aug 13 01:38:32.268132 containerd[1576]: time="2025-08-13T01:38:32.268097901Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 01:38:32.268428 containerd[1576]: time="2025-08-13T01:38:32.268381211Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 01:38:32.268527 containerd[1576]: time="2025-08-13T01:38:32.268437911Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 01:38:32.269982 containerd[1576]: time="2025-08-13T01:38:32.269935522Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270102822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270131432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270142692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270153722Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270166412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270177372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270187682Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270226512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270237832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270249222Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270295472Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270309662Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 01:38:32.270947 containerd[1576]: time="2025-08-13T01:38:32.270317652Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270326782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270389812Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270404612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270415742Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 13 01:38:32.271395 containerd[1576]:
time="2025-08-13T01:38:32.270433432Z" level=info msg="runtime interface created" Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270439002Z" level=info msg="created NRI interface" Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270445922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270456012Z" level=info msg="Connect containerd service" Aug 13 01:38:32.271395 containerd[1576]: time="2025-08-13T01:38:32.270479832Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:38:32.272598 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:38:32.273986 containerd[1576]: time="2025-08-13T01:38:32.272760053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:38:32.272858 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:38:32.281451 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:38:32.309465 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:38:32.314375 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:38:32.339540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:38:32.341375 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 13 01:38:32.358406 coreos-metadata[1608]: Aug 13 01:38:32.358 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:38:32.387143 polkitd[1630]: Started polkitd version 126 Aug 13 01:38:32.391128 containerd[1576]: time="2025-08-13T01:38:32.391068402Z" level=info msg="Start subscribing containerd event" Aug 13 01:38:32.391189 containerd[1576]: time="2025-08-13T01:38:32.391139633Z" level=info msg="Start recovering state" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391248883Z" level=info msg="Start event monitor" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391270283Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391280553Z" level=info msg="Start streaming server" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391290233Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391297603Z" level=info msg="runtime interface starting up..." Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391303313Z" level=info msg="starting plugins..." Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391319263Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391738143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.391786713Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:38:32.392845 containerd[1576]: time="2025-08-13T01:38:32.392786633Z" level=info msg="containerd successfully booted in 0.151257s" Aug 13 01:38:32.392001 polkitd[1630]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:38:32.391901 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 13 01:38:32.392359 polkitd[1630]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:38:32.392404 polkitd[1630]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:38:32.392637 polkitd[1630]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:38:32.392658 polkitd[1630]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:38:32.392674 polkitd[1630]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:38:32.393879 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:38:32.393224 polkitd[1630]: Finished loading, compiling and executing 2 rules Aug 13 01:38:32.393739 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:38:32.394474 polkitd[1630]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:38:32.404417 systemd-hostnamed[1602]: Hostname set to <172-236-100-188> (transient) Aug 13 01:38:32.404430 systemd-resolved[1412]: System hostname changed to '172-236-100-188'. Aug 13 01:38:32.429271 systemd-networkd[1455]: eth0: Gained IPv6LL Aug 13 01:38:32.435983 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:38:32.437929 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:38:32.440909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:38:32.444250 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:38:32.477924 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 13 01:38:32.515737 coreos-metadata[1608]: Aug 13 01:38:32.515 INFO Fetch successful Aug 13 01:38:32.537146 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:38:32.538516 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:38:32.541424 systemd[1]: Finished sshkeys.service. Aug 13 01:38:32.891236 coreos-metadata[1528]: Aug 13 01:38:32.891 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:38:32.997390 coreos-metadata[1528]: Aug 13 01:38:32.997 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:38:33.210073 coreos-metadata[1528]: Aug 13 01:38:33.209 INFO Fetch successful Aug 13 01:38:33.210320 coreos-metadata[1528]: Aug 13 01:38:33.210 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:38:33.312055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:38:33.324324 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:38:33.537919 coreos-metadata[1528]: Aug 13 01:38:33.537 INFO Fetch successful Aug 13 01:38:33.640488 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:38:33.642477 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:38:33.642939 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:38:33.644246 systemd[1]: Startup finished in 2.820s (kernel) + 6.309s (initrd) + 4.709s (userspace) = 13.839s. 
Aug 13 01:38:33.824259 kubelet[1684]: E0813 01:38:33.824136 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:38:33.828223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:38:33.828424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:38:33.828990 systemd[1]: kubelet.service: Consumed 837ms CPU time, 263.3M memory peak. Aug 13 01:38:36.586448 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:38:36.587844 systemd[1]: Started sshd@0-172.236.100.188:22-147.75.109.163:47832.service - OpenSSH per-connection server daemon (147.75.109.163:47832). Aug 13 01:38:36.938396 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 47832 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:36.940024 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:36.946526 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:38:36.947912 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:38:36.955723 systemd-logind[1540]: New session 1 of user core. Aug 13 01:38:36.968814 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:38:36.972155 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:38:36.984337 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:38:36.986666 systemd-logind[1540]: New session c1 of user core. Aug 13 01:38:37.112461 systemd[1720]: Queued start job for default target default.target. 
Aug 13 01:38:37.123410 systemd[1720]: Created slice app.slice - User Application Slice. Aug 13 01:38:37.123443 systemd[1720]: Reached target paths.target - Paths. Aug 13 01:38:37.123494 systemd[1720]: Reached target timers.target - Timers. Aug 13 01:38:37.125160 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:38:37.134909 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:38:37.134971 systemd[1720]: Reached target sockets.target - Sockets. Aug 13 01:38:37.135001 systemd[1720]: Reached target basic.target - Basic System. Aug 13 01:38:37.135058 systemd[1720]: Reached target default.target - Main User Target. Aug 13 01:38:37.135090 systemd[1720]: Startup finished in 142ms. Aug 13 01:38:37.135519 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:38:37.146198 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:38:37.406806 systemd[1]: Started sshd@1-172.236.100.188:22-147.75.109.163:47838.service - OpenSSH per-connection server daemon (147.75.109.163:47838). Aug 13 01:38:37.745875 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 47838 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:37.747172 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:37.755374 systemd-logind[1540]: New session 2 of user core. Aug 13 01:38:37.763502 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:38:37.992307 sshd[1733]: Connection closed by 147.75.109.163 port 47838 Aug 13 01:38:37.992933 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:37.996999 systemd[1]: sshd@1-172.236.100.188:22-147.75.109.163:47838.service: Deactivated successfully. Aug 13 01:38:37.998884 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:38:37.999603 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. 
Aug 13 01:38:38.001099 systemd-logind[1540]: Removed session 2. Aug 13 01:38:38.060973 systemd[1]: Started sshd@2-172.236.100.188:22-147.75.109.163:39458.service - OpenSSH per-connection server daemon (147.75.109.163:39458). Aug 13 01:38:38.402697 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 39458 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:38.404080 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:38.409597 systemd-logind[1540]: New session 3 of user core. Aug 13 01:38:38.415163 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:38:38.646337 sshd[1741]: Connection closed by 147.75.109.163 port 39458 Aug 13 01:38:38.646878 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:38.651189 systemd[1]: sshd@2-172.236.100.188:22-147.75.109.163:39458.service: Deactivated successfully. Aug 13 01:38:38.652967 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:38:38.653903 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:38:38.655156 systemd-logind[1540]: Removed session 3. Aug 13 01:38:38.707917 systemd[1]: Started sshd@3-172.236.100.188:22-147.75.109.163:39460.service - OpenSSH per-connection server daemon (147.75.109.163:39460). Aug 13 01:38:39.045645 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 39460 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:39.047259 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:39.052092 systemd-logind[1540]: New session 4 of user core. Aug 13 01:38:39.060160 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 13 01:38:39.291441 sshd[1749]: Connection closed by 147.75.109.163 port 39460 Aug 13 01:38:39.292034 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:39.303727 systemd[1]: sshd@3-172.236.100.188:22-147.75.109.163:39460.service: Deactivated successfully. Aug 13 01:38:39.305911 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:38:39.306863 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:38:39.308167 systemd-logind[1540]: Removed session 4. Aug 13 01:38:39.353236 systemd[1]: Started sshd@4-172.236.100.188:22-147.75.109.163:39474.service - OpenSSH per-connection server daemon (147.75.109.163:39474). Aug 13 01:38:39.680499 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 39474 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:39.682308 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:39.687471 systemd-logind[1540]: New session 5 of user core. Aug 13 01:38:39.703168 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:38:39.883105 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:38:39.883415 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:38:39.897262 sudo[1758]: pam_unix(sudo:session): session closed for user root Aug 13 01:38:39.947442 sshd[1757]: Connection closed by 147.75.109.163 port 39474 Aug 13 01:38:39.948063 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:39.953623 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:38:39.954361 systemd[1]: sshd@4-172.236.100.188:22-147.75.109.163:39474.service: Deactivated successfully. Aug 13 01:38:39.956344 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:38:39.958237 systemd-logind[1540]: Removed session 5. 
Aug 13 01:38:40.014256 systemd[1]: Started sshd@5-172.236.100.188:22-147.75.109.163:39490.service - OpenSSH per-connection server daemon (147.75.109.163:39490). Aug 13 01:38:40.354442 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 39490 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:40.355991 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:40.361580 systemd-logind[1540]: New session 6 of user core. Aug 13 01:38:40.364162 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:38:40.555384 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:38:40.555687 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:38:40.560924 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 13 01:38:40.567230 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:38:40.567551 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:38:40.578247 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:38:40.615678 augenrules[1790]: No rules Aug 13 01:38:40.617245 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:38:40.617603 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:38:40.619689 sudo[1767]: pam_unix(sudo:session): session closed for user root Aug 13 01:38:40.671727 sshd[1766]: Connection closed by 147.75.109.163 port 39490 Aug 13 01:38:40.672418 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:40.677151 systemd[1]: sshd@5-172.236.100.188:22-147.75.109.163:39490.service: Deactivated successfully. Aug 13 01:38:40.679377 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 13 01:38:40.680249 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:38:40.681498 systemd-logind[1540]: Removed session 6. Aug 13 01:38:40.734216 systemd[1]: Started sshd@6-172.236.100.188:22-147.75.109.163:39500.service - OpenSSH per-connection server daemon (147.75.109.163:39500). Aug 13 01:38:41.078304 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 39500 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:41.080135 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:41.095778 systemd-logind[1540]: New session 7 of user core. Aug 13 01:38:41.101183 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:38:41.278378 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:38:41.278704 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:38:41.799448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:38:41.800064 systemd[1]: kubelet.service: Consumed 837ms CPU time, 263.3M memory peak. Aug 13 01:38:41.802932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:38:41.828775 systemd[1]: Reload requested from client PID 1836 ('systemctl') (unit session-7.scope)... Aug 13 01:38:41.828796 systemd[1]: Reloading... Aug 13 01:38:41.979868 zram_generator::config[1882]: No configuration found. Aug 13 01:38:42.066378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:38:42.167698 systemd[1]: Reloading finished in 338 ms. Aug 13 01:38:42.228164 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:38:42.228265 systemd[1]: kubelet.service: Failed with result 'signal'. 
Aug 13 01:38:42.228664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:38:42.228708 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.3M memory peak. Aug 13 01:38:42.230954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:38:42.388183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:38:42.396774 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:38:42.442191 kubelet[1934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:38:42.442191 kubelet[1934]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:38:42.442191 kubelet[1934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:38:42.442536 kubelet[1934]: I0813 01:38:42.442254 1934 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:38:42.841869 kubelet[1934]: I0813 01:38:42.841774 1934 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:38:42.841869 kubelet[1934]: I0813 01:38:42.841805 1934 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:38:42.842289 kubelet[1934]: I0813 01:38:42.842260 1934 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:38:42.864903 kubelet[1934]: I0813 01:38:42.864882 1934 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:38:42.878123 kubelet[1934]: I0813 01:38:42.875982 1934 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:38:42.879524 kubelet[1934]: I0813 01:38:42.879509 1934 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:38:42.880943 kubelet[1934]: I0813 01:38:42.880915 1934 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:38:42.881156 kubelet[1934]: I0813 01:38:42.880989 1934 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"192.168.169.77","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:38:42.881312 kubelet[1934]: I0813 01:38:42.881300 1934 topology_manager.go:138] "Creating topology manager with none 
policy" Aug 13 01:38:42.881360 kubelet[1934]: I0813 01:38:42.881352 1934 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:38:42.881538 kubelet[1934]: I0813 01:38:42.881526 1934 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:38:42.885312 kubelet[1934]: I0813 01:38:42.885297 1934 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:38:42.887001 kubelet[1934]: I0813 01:38:42.886986 1934 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:38:42.887117 kubelet[1934]: I0813 01:38:42.887105 1934 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:38:42.887171 kubelet[1934]: I0813 01:38:42.887162 1934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:38:42.887224 kubelet[1934]: E0813 01:38:42.887196 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:42.887224 kubelet[1934]: E0813 01:38:42.887157 1934 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:42.889633 kubelet[1934]: I0813 01:38:42.889617 1934 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:38:42.890063 kubelet[1934]: I0813 01:38:42.890027 1934 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:38:42.890777 kubelet[1934]: W0813 01:38:42.890763 1934 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 01:38:42.892677 kubelet[1934]: I0813 01:38:42.892662 1934 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:38:42.892753 kubelet[1934]: I0813 01:38:42.892743 1934 server.go:1287] "Started kubelet" Aug 13 01:38:42.894096 kubelet[1934]: I0813 01:38:42.894082 1934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:38:42.897590 kubelet[1934]: W0813 01:38:42.897559 1934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "192.168.169.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Aug 13 01:38:42.897630 kubelet[1934]: E0813 01:38:42.897607 1934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"192.168.169.77\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Aug 13 01:38:42.897655 kubelet[1934]: W0813 01:38:42.897639 1934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Aug 13 01:38:42.897655 kubelet[1934]: E0813 01:38:42.897651 1934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Aug 13 01:38:42.900030 kubelet[1934]: I0813 01:38:42.899995 1934 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:38:42.901546 kubelet[1934]: I0813 01:38:42.901515 1934 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:38:42.903938 kubelet[1934]: I0813 01:38:42.902970 1934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Aug 13 01:38:42.903938 kubelet[1934]: I0813 01:38:42.903199 1934 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:38:42.903938 kubelet[1934]: I0813 01:38:42.903354 1934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:38:42.904202 kubelet[1934]: I0813 01:38:42.904181 1934 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:38:42.904417 kubelet[1934]: E0813 01:38:42.904394 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:42.905158 kubelet[1934]: I0813 01:38:42.905000 1934 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:38:42.905158 kubelet[1934]: I0813 01:38:42.905084 1934 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:38:42.907569 kubelet[1934]: E0813 01:38:42.907545 1934 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:38:42.908201 kubelet[1934]: I0813 01:38:42.908175 1934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:38:42.911059 kubelet[1934]: I0813 01:38:42.910509 1934 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:38:42.911059 kubelet[1934]: I0813 01:38:42.910546 1934 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:38:42.931498 kubelet[1934]: E0813 01:38:42.930598 1934 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"192.168.169.77\" not found" node="192.168.169.77" Aug 13 01:38:42.938178 kubelet[1934]: I0813 01:38:42.938156 1934 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:38:42.938178 kubelet[1934]: I0813 01:38:42.938170 1934 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:38:42.938244 kubelet[1934]: I0813 01:38:42.938186 1934 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:38:42.940451 kubelet[1934]: I0813 01:38:42.939550 1934 policy_none.go:49] "None policy: Start" Aug 13 01:38:42.940451 kubelet[1934]: I0813 01:38:42.939567 1934 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:38:42.940451 kubelet[1934]: I0813 01:38:42.939598 1934 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:38:42.946558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:38:42.959324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:38:42.963982 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 01:38:42.964354 kubelet[1934]: I0813 01:38:42.964324 1934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:38:42.966437 kubelet[1934]: I0813 01:38:42.966421 1934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:38:42.966496 kubelet[1934]: I0813 01:38:42.966486 1934 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:38:42.966557 kubelet[1934]: I0813 01:38:42.966547 1934 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:38:42.966596 kubelet[1934]: I0813 01:38:42.966588 1934 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:38:42.966762 kubelet[1934]: E0813 01:38:42.966731 1934 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:38:42.970300 kubelet[1934]: I0813 01:38:42.970121 1934 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:38:42.970300 kubelet[1934]: I0813 01:38:42.970284 1934 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:38:42.970372 kubelet[1934]: I0813 01:38:42.970293 1934 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:38:42.971919 kubelet[1934]: I0813 01:38:42.971905 1934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:38:42.974572 kubelet[1934]: E0813 01:38:42.974555 1934 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:38:42.974864 kubelet[1934]: E0813 01:38:42.974838 1934 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"192.168.169.77\" not found" Aug 13 01:38:43.071197 kubelet[1934]: I0813 01:38:43.071168 1934 kubelet_node_status.go:75] "Attempting to register node" node="192.168.169.77" Aug 13 01:38:43.080409 kubelet[1934]: I0813 01:38:43.080386 1934 kubelet_node_status.go:78] "Successfully registered node" node="192.168.169.77" Aug 13 01:38:43.080490 kubelet[1934]: E0813 01:38:43.080413 1934 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"192.168.169.77\": node \"192.168.169.77\" not found" Aug 13 01:38:43.119259 kubelet[1934]: E0813 01:38:43.119231 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.220145 kubelet[1934]: E0813 01:38:43.220119 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.320815 kubelet[1934]: E0813 01:38:43.320766 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.422158 kubelet[1934]: E0813 01:38:43.421756 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.425778 sudo[1802]: pam_unix(sudo:session): session closed for user root Aug 13 01:38:43.477439 sshd[1801]: Connection closed by 147.75.109.163 port 39500 Aug 13 01:38:43.478115 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:43.482463 systemd[1]: sshd@6-172.236.100.188:22-147.75.109.163:39500.service: Deactivated successfully. Aug 13 01:38:43.484834 systemd[1]: session-7.scope: Deactivated successfully. 
Aug 13 01:38:43.485069 systemd[1]: session-7.scope: Consumed 421ms CPU time, 73.1M memory peak. Aug 13 01:38:43.486910 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:38:43.488235 systemd-logind[1540]: Removed session 7. Aug 13 01:38:43.522275 kubelet[1934]: E0813 01:38:43.522243 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.625836 kubelet[1934]: E0813 01:38:43.625776 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.726924 kubelet[1934]: E0813 01:38:43.726809 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.827668 kubelet[1934]: E0813 01:38:43.827638 1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:43.844013 kubelet[1934]: I0813 01:38:43.843963 1934 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 01:38:43.844224 kubelet[1934]: W0813 01:38:43.844187 1934 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Aug 13 01:38:43.844224 kubelet[1934]: W0813 01:38:43.844198 1934 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Aug 13 01:38:43.888382 kubelet[1934]: E0813 01:38:43.888348 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:43.927821 kubelet[1934]: E0813 01:38:43.927785 
1934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.169.77\" not found" Aug 13 01:38:44.028454 kubelet[1934]: I0813 01:38:44.028346 1934 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 01:38:44.028816 containerd[1576]: time="2025-08-13T01:38:44.028695707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:38:44.029338 kubelet[1934]: I0813 01:38:44.029271 1934 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 01:38:44.889199 kubelet[1934]: E0813 01:38:44.889134 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:44.889199 kubelet[1934]: I0813 01:38:44.889186 1934 apiserver.go:52] "Watching apiserver" Aug 13 01:38:44.893125 kubelet[1934]: E0813 01:38:44.892197 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj94h" podUID="4787f758-46fd-4818-87f7-49572bbac91a" Aug 13 01:38:44.898971 systemd[1]: Created slice kubepods-besteffort-poddabe49ed_c10a_4c01_b062_1155f516bb9b.slice - libcontainer container kubepods-besteffort-poddabe49ed_c10a_4c01_b062_1155f516bb9b.slice. 
Aug 13 01:38:44.905309 kubelet[1934]: I0813 01:38:44.905280 1934 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:38:44.917058 kubelet[1934]: I0813 01:38:44.916964 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dabe49ed-c10a-4c01-b062-1155f516bb9b-node-certs\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.917058 kubelet[1934]: I0813 01:38:44.917001 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/dabe49ed-c10a-4c01-b062-1155f516bb9b-kube-api-access-5cnmd\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.917058 kubelet[1934]: I0813 01:38:44.917020 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4787f758-46fd-4818-87f7-49572bbac91a-varrun\") pod \"csi-node-driver-mj94h\" (UID: \"4787f758-46fd-4818-87f7-49572bbac91a\") " pod="calico-system/csi-node-driver-mj94h" Aug 13 01:38:44.917075 systemd[1]: Created slice kubepods-besteffort-pod63c661de_d2c6_4fcc_93a0_f9d7857c2d35.slice - libcontainer container kubepods-besteffort-pod63c661de_d2c6_4fcc_93a0_f9d7857c2d35.slice. 
Aug 13 01:38:44.918065 kubelet[1934]: I0813 01:38:44.917232 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9c7z\" (UniqueName: \"kubernetes.io/projected/275c7ee1-eec0-42b9-8633-4e2d45c3fed9-kube-api-access-r9c7z\") pod \"kube-proxy-rm5rw\" (UID: \"275c7ee1-eec0-42b9-8633-4e2d45c3fed9\") " pod="kube-system/kube-proxy-rm5rw" Aug 13 01:38:44.918065 kubelet[1934]: I0813 01:38:44.917264 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-var-lib-calico\") pod \"tigera-operator-747864d56d-kdxxp\" (UID: \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\") " pod="tigera-operator/tigera-operator-747864d56d-kdxxp" Aug 13 01:38:44.918065 kubelet[1934]: I0813 01:38:44.917282 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-lib-modules\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918065 kubelet[1934]: I0813 01:38:44.917300 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-cni-bin-dir\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918065 kubelet[1934]: I0813 01:38:44.917314 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-var-lib-calico\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918185 
kubelet[1934]: I0813 01:38:44.917333 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-cni-net-dir\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918185 kubelet[1934]: I0813 01:38:44.917350 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-flexvol-driver-host\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918185 kubelet[1934]: I0813 01:38:44.917366 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-policysync\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918185 kubelet[1934]: I0813 01:38:44.917383 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4787f758-46fd-4818-87f7-49572bbac91a-kubelet-dir\") pod \"csi-node-driver-mj94h\" (UID: \"4787f758-46fd-4818-87f7-49572bbac91a\") " pod="calico-system/csi-node-driver-mj94h" Aug 13 01:38:44.918185 kubelet[1934]: I0813 01:38:44.917396 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4787f758-46fd-4818-87f7-49572bbac91a-socket-dir\") pod \"csi-node-driver-mj94h\" (UID: \"4787f758-46fd-4818-87f7-49572bbac91a\") " pod="calico-system/csi-node-driver-mj94h" Aug 13 01:38:44.918272 kubelet[1934]: I0813 01:38:44.917411 1934 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/275c7ee1-eec0-42b9-8633-4e2d45c3fed9-kube-proxy\") pod \"kube-proxy-rm5rw\" (UID: \"275c7ee1-eec0-42b9-8633-4e2d45c3fed9\") " pod="kube-system/kube-proxy-rm5rw" Aug 13 01:38:44.918272 kubelet[1934]: I0813 01:38:44.917430 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztsf\" (UniqueName: \"kubernetes.io/projected/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-kube-api-access-6ztsf\") pod \"tigera-operator-747864d56d-kdxxp\" (UID: \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\") " pod="tigera-operator/tigera-operator-747864d56d-kdxxp" Aug 13 01:38:44.918272 kubelet[1934]: I0813 01:38:44.917447 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-cni-log-dir\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918272 kubelet[1934]: I0813 01:38:44.917461 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/275c7ee1-eec0-42b9-8633-4e2d45c3fed9-lib-modules\") pod \"kube-proxy-rm5rw\" (UID: \"275c7ee1-eec0-42b9-8633-4e2d45c3fed9\") " pod="kube-system/kube-proxy-rm5rw" Aug 13 01:38:44.918272 kubelet[1934]: I0813 01:38:44.917479 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-xtables-lock\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918360 kubelet[1934]: I0813 01:38:44.917494 1934 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4787f758-46fd-4818-87f7-49572bbac91a-registration-dir\") pod \"csi-node-driver-mj94h\" (UID: \"4787f758-46fd-4818-87f7-49572bbac91a\") " pod="calico-system/csi-node-driver-mj94h" Aug 13 01:38:44.918360 kubelet[1934]: I0813 01:38:44.917513 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8ws\" (UniqueName: \"kubernetes.io/projected/4787f758-46fd-4818-87f7-49572bbac91a-kube-api-access-mw8ws\") pod \"csi-node-driver-mj94h\" (UID: \"4787f758-46fd-4818-87f7-49572bbac91a\") " pod="calico-system/csi-node-driver-mj94h" Aug 13 01:38:44.918360 kubelet[1934]: I0813 01:38:44.917528 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/275c7ee1-eec0-42b9-8633-4e2d45c3fed9-xtables-lock\") pod \"kube-proxy-rm5rw\" (UID: \"275c7ee1-eec0-42b9-8633-4e2d45c3fed9\") " pod="kube-system/kube-proxy-rm5rw" Aug 13 01:38:44.918360 kubelet[1934]: I0813 01:38:44.917543 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabe49ed-c10a-4c01-b062-1155f516bb9b-tigera-ca-bundle\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.918360 kubelet[1934]: I0813 01:38:44.917559 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dabe49ed-c10a-4c01-b062-1155f516bb9b-var-run-calico\") pod \"calico-node-hxxs9\" (UID: \"dabe49ed-c10a-4c01-b062-1155f516bb9b\") " pod="calico-system/calico-node-hxxs9" Aug 13 01:38:44.928310 systemd[1]: Created slice kubepods-besteffort-pod275c7ee1_eec0_42b9_8633_4e2d45c3fed9.slice - 
libcontainer container kubepods-besteffort-pod275c7ee1_eec0_42b9_8633_4e2d45c3fed9.slice. Aug 13 01:38:45.020214 kubelet[1934]: E0813 01:38:45.020184 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.020303 kubelet[1934]: W0813 01:38:45.020288 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.020455 kubelet[1934]: E0813 01:38:45.020440 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.020739 kubelet[1934]: E0813 01:38:45.020716 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.020739 kubelet[1934]: W0813 01:38:45.020729 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.020739 kubelet[1934]: E0813 01:38:45.020745 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.020932 kubelet[1934]: E0813 01:38:45.020917 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.020932 kubelet[1934]: W0813 01:38:45.020929 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.021012 kubelet[1934]: E0813 01:38:45.020953 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.021219 kubelet[1934]: E0813 01:38:45.021198 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.021219 kubelet[1934]: W0813 01:38:45.021214 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.021349 kubelet[1934]: E0813 01:38:45.021250 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.021567 kubelet[1934]: E0813 01:38:45.021456 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.021567 kubelet[1934]: W0813 01:38:45.021481 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.021567 kubelet[1934]: E0813 01:38:45.021507 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.021704 kubelet[1934]: E0813 01:38:45.021686 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.021704 kubelet[1934]: W0813 01:38:45.021698 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.021825 kubelet[1934]: E0813 01:38:45.021812 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.021887 kubelet[1934]: E0813 01:38:45.021880 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.021920 kubelet[1934]: W0813 01:38:45.021888 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.021996 kubelet[1934]: E0813 01:38:45.021976 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.022133 kubelet[1934]: E0813 01:38:45.022111 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022133 kubelet[1934]: W0813 01:38:45.022123 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.022249 kubelet[1934]: E0813 01:38:45.022205 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.022292 kubelet[1934]: E0813 01:38:45.022283 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022343 kubelet[1934]: W0813 01:38:45.022292 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.022408 kubelet[1934]: E0813 01:38:45.022369 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.022457 kubelet[1934]: E0813 01:38:45.022440 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022457 kubelet[1934]: W0813 01:38:45.022452 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.022552 kubelet[1934]: E0813 01:38:45.022482 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.022618 kubelet[1934]: E0813 01:38:45.022601 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022618 kubelet[1934]: W0813 01:38:45.022613 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.022740 kubelet[1934]: E0813 01:38:45.022674 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.022787 kubelet[1934]: E0813 01:38:45.022771 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022787 kubelet[1934]: W0813 01:38:45.022783 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.022883 kubelet[1934]: E0813 01:38:45.022811 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.022965 kubelet[1934]: E0813 01:38:45.022949 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.022965 kubelet[1934]: W0813 01:38:45.022962 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.023091 kubelet[1934]: E0813 01:38:45.022992 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.023160 kubelet[1934]: E0813 01:38:45.023152 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.023224 kubelet[1934]: W0813 01:38:45.023161 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.023224 kubelet[1934]: E0813 01:38:45.023192 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.023400 kubelet[1934]: E0813 01:38:45.023381 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.023400 kubelet[1934]: W0813 01:38:45.023394 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.023534 kubelet[1934]: E0813 01:38:45.023516 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.023601 kubelet[1934]: E0813 01:38:45.023583 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.023601 kubelet[1934]: W0813 01:38:45.023597 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.023707 kubelet[1934]: E0813 01:38:45.023683 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.023812 kubelet[1934]: E0813 01:38:45.023783 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.023812 kubelet[1934]: W0813 01:38:45.023795 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.023906 kubelet[1934]: E0813 01:38:45.023860 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.023973 kubelet[1934]: E0813 01:38:45.023958 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.023973 kubelet[1934]: W0813 01:38:45.023965 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024077 kubelet[1934]: E0813 01:38:45.023993 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.024149 kubelet[1934]: E0813 01:38:45.024130 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024149 kubelet[1934]: W0813 01:38:45.024143 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024255 kubelet[1934]: E0813 01:38:45.024173 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.024306 kubelet[1934]: E0813 01:38:45.024290 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024306 kubelet[1934]: W0813 01:38:45.024301 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024412 kubelet[1934]: E0813 01:38:45.024328 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.024925 kubelet[1934]: E0813 01:38:45.024440 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024925 kubelet[1934]: W0813 01:38:45.024461 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024925 kubelet[1934]: E0813 01:38:45.024491 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.024925 kubelet[1934]: E0813 01:38:45.024603 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024925 kubelet[1934]: W0813 01:38:45.024611 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024925 kubelet[1934]: E0813 01:38:45.024748 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024925 kubelet[1934]: W0813 01:38:45.024755 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.024925 kubelet[1934]: E0813 01:38:45.024886 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.024925 kubelet[1934]: W0813 01:38:45.024893 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025158 kubelet[1934]: E0813 01:38:45.025023 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.025158 kubelet[1934]: W0813 01:38:45.025030 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025207 kubelet[1934]: E0813 01:38:45.025182 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.025207 kubelet[1934]: W0813 01:38:45.025200 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025365 kubelet[1934]: E0813 01:38:45.025336 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.025365 kubelet[1934]: W0813 01:38:45.025351 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025495 kubelet[1934]: E0813 01:38:45.025474 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.025495 kubelet[1934]: W0813 01:38:45.025488 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025630 kubelet[1934]: E0813 01:38:45.025611 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Aug 13 01:38:45.025630 kubelet[1934]: W0813 01:38:45.025626 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025936 kubelet[1934]: E0813 01:38:45.025750 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.025936 kubelet[1934]: W0813 01:38:45.025762 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.025936 kubelet[1934]: E0813 01:38:45.025770 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026007 kubelet[1934]: E0813 01:38:45.025943 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026007 kubelet[1934]: W0813 01:38:45.025951 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026007 kubelet[1934]: E0813 01:38:45.025958 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026007 kubelet[1934]: E0813 01:38:45.025972 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.026131 kubelet[1934]: E0813 01:38:45.026115 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026131 kubelet[1934]: W0813 01:38:45.026123 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026131 kubelet[1934]: E0813 01:38:45.026130 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026582 kubelet[1934]: E0813 01:38:45.026291 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026582 kubelet[1934]: W0813 01:38:45.026303 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026582 kubelet[1934]: E0813 01:38:45.026311 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026582 kubelet[1934]: E0813 01:38:45.026323 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.026582 kubelet[1934]: E0813 01:38:45.026448 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026582 kubelet[1934]: W0813 01:38:45.026455 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026582 kubelet[1934]: E0813 01:38:45.026462 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026789 kubelet[1934]: E0813 01:38:45.026606 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026789 kubelet[1934]: W0813 01:38:45.026614 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026789 kubelet[1934]: E0813 01:38:45.026621 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.026789 kubelet[1934]: E0813 01:38:45.026732 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.026977 kubelet[1934]: E0813 01:38:45.026952 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.026977 kubelet[1934]: W0813 01:38:45.026970 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.026977 kubelet[1934]: E0813 01:38:45.026978 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.027064 kubelet[1934]: E0813 01:38:45.026991 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.027216 kubelet[1934]: E0813 01:38:45.027191 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.027216 kubelet[1934]: W0813 01:38:45.027208 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.027216 kubelet[1934]: E0813 01:38:45.027216 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.027275 kubelet[1934]: E0813 01:38:45.027228 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.027391 kubelet[1934]: E0813 01:38:45.027369 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.027391 kubelet[1934]: W0813 01:38:45.027384 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.027441 kubelet[1934]: E0813 01:38:45.027392 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.028006 kubelet[1934]: E0813 01:38:45.027935 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030130 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.033070 kubelet[1934]: W0813 01:38:45.030146 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030156 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030266 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030445 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.033070 kubelet[1934]: W0813 01:38:45.030453 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030460 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030474 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.033070 kubelet[1934]: E0813 01:38:45.030624 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.033070 kubelet[1934]: W0813 01:38:45.030631 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.033282 kubelet[1934]: E0813 01:38:45.030638 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.038070 kubelet[1934]: E0813 01:38:45.034722 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.038070 kubelet[1934]: W0813 01:38:45.034741 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.038070 kubelet[1934]: E0813 01:38:45.034750 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039269 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.041078 kubelet[1934]: W0813 01:38:45.039283 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039325 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039545 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.041078 kubelet[1934]: W0813 01:38:45.039553 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039712 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.041078 kubelet[1934]: W0813 01:38:45.039719 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039727 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.041078 kubelet[1934]: E0813 01:38:45.039898 1934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:38:45.041078 kubelet[1934]: W0813 01:38:45.039905 1934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:38:45.041289 kubelet[1934]: E0813 01:38:45.039913 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:38:45.041289 kubelet[1934]: E0813 01:38:45.039930 1934 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:38:45.211824 containerd[1576]: time="2025-08-13T01:38:45.211697388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxxs9,Uid:dabe49ed-c10a-4c01-b062-1155f516bb9b,Namespace:calico-system,Attempt:0,}" Aug 13 01:38:45.227482 containerd[1576]: time="2025-08-13T01:38:45.227376716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-kdxxp,Uid:63c661de-d2c6-4fcc-93a0-f9d7857c2d35,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:38:45.231677 kubelet[1934]: E0813 01:38:45.231654 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:38:45.232302 containerd[1576]: time="2025-08-13T01:38:45.232010149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm5rw,Uid:275c7ee1-eec0-42b9-8633-4e2d45c3fed9,Namespace:kube-system,Attempt:0,}" Aug 13 01:38:45.890034 kubelet[1934]: E0813 01:38:45.889944 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:45.968603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350476248.mount: Deactivated successfully. 
Aug 13 01:38:45.974893 containerd[1576]: time="2025-08-13T01:38:45.974855440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:38:45.976319 containerd[1576]: time="2025-08-13T01:38:45.976281450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:38:45.977232 containerd[1576]: time="2025-08-13T01:38:45.977188831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:38:45.977752 containerd[1576]: time="2025-08-13T01:38:45.977723821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Aug 13 01:38:45.979068 containerd[1576]: time="2025-08-13T01:38:45.978822682Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:38:45.979929 containerd[1576]: time="2025-08-13T01:38:45.979909682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:38:45.980145 containerd[1576]: time="2025-08-13T01:38:45.980127902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Aug 13 01:38:45.987622 containerd[1576]: time="2025-08-13T01:38:45.987579876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 753.301246ms" Aug 13 01:38:45.988222 containerd[1576]: time="2025-08-13T01:38:45.988108346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:38:45.989686 containerd[1576]: time="2025-08-13T01:38:45.989651347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 759.207939ms" Aug 13 01:38:45.994238 containerd[1576]: time="2025-08-13T01:38:45.994200209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 777.532738ms" Aug 13 01:38:46.010946 containerd[1576]: time="2025-08-13T01:38:46.010840598Z" level=info msg="connecting to shim 0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01" address="unix:///run/containerd/s/9d360957bb3b1e884ec46f8651b99dac7832bb4d1b5d69a18bb0f4988572b31f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:38:46.011347 containerd[1576]: time="2025-08-13T01:38:46.011315458Z" level=info msg="connecting to shim 53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9" address="unix:///run/containerd/s/584b764b616a35ec0c81d25142fbe513f1ff984a3bc2d3dce4ea7cf16d275cbe" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:38:46.035389 containerd[1576]: 
time="2025-08-13T01:38:46.035181650Z" level=info msg="connecting to shim 90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135" address="unix:///run/containerd/s/6d500f4cbaa83819e79802bb3b4586eba1525f2c65095f58ced2512ea68225dd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:38:46.037363 systemd[1]: Started cri-containerd-0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01.scope - libcontainer container 0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01. Aug 13 01:38:46.062190 systemd[1]: Started cri-containerd-53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9.scope - libcontainer container 53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9. Aug 13 01:38:46.069942 systemd[1]: Started cri-containerd-90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135.scope - libcontainer container 90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135. Aug 13 01:38:46.110626 containerd[1576]: time="2025-08-13T01:38:46.110524887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxxs9,Uid:dabe49ed-c10a-4c01-b062-1155f516bb9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\"" Aug 13 01:38:46.113640 containerd[1576]: time="2025-08-13T01:38:46.113604099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm5rw,Uid:275c7ee1-eec0-42b9-8633-4e2d45c3fed9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01\"" Aug 13 01:38:46.114798 kubelet[1934]: E0813 01:38:46.114678 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:38:46.115361 containerd[1576]: time="2025-08-13T01:38:46.115257540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 
13 01:38:46.145213 containerd[1576]: time="2025-08-13T01:38:46.145016275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-kdxxp,Uid:63c661de-d2c6-4fcc-93a0-f9d7857c2d35,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\"" Aug 13 01:38:46.674818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864032074.mount: Deactivated successfully. Aug 13 01:38:46.761152 containerd[1576]: time="2025-08-13T01:38:46.761102133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:46.762030 containerd[1576]: time="2025-08-13T01:38:46.761815583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 13 01:38:46.762615 containerd[1576]: time="2025-08-13T01:38:46.762577143Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:46.764236 containerd[1576]: time="2025-08-13T01:38:46.764197754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:46.764780 containerd[1576]: time="2025-08-13T01:38:46.764747284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 649.392804ms" Aug 13 01:38:46.764860 containerd[1576]: 
time="2025-08-13T01:38:46.764842544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:38:46.766069 containerd[1576]: time="2025-08-13T01:38:46.766018925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 01:38:46.768174 containerd[1576]: time="2025-08-13T01:38:46.768144576Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:38:46.777092 containerd[1576]: time="2025-08-13T01:38:46.776120340Z" level=info msg="Container e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:38:46.786932 containerd[1576]: time="2025-08-13T01:38:46.786899125Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\"" Aug 13 01:38:46.787638 containerd[1576]: time="2025-08-13T01:38:46.787595586Z" level=info msg="StartContainer for \"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\"" Aug 13 01:38:46.788953 containerd[1576]: time="2025-08-13T01:38:46.788926606Z" level=info msg="connecting to shim e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328" address="unix:///run/containerd/s/6d500f4cbaa83819e79802bb3b4586eba1525f2c65095f58ced2512ea68225dd" protocol=ttrpc version=3 Aug 13 01:38:46.809187 systemd[1]: Started cri-containerd-e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328.scope - libcontainer container e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328. 
Aug 13 01:38:46.869090 systemd[1]: cri-containerd-e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328.scope: Deactivated successfully. Aug 13 01:38:46.870796 containerd[1576]: time="2025-08-13T01:38:46.870752097Z" level=info msg="StartContainer for \"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\" returns successfully" Aug 13 01:38:46.873260 containerd[1576]: time="2025-08-13T01:38:46.873238349Z" level=info msg="received exit event container_id:\"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\" id:\"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\" pid:2189 exited_at:{seconds:1755049126 nanos:872808298}" Aug 13 01:38:46.873655 containerd[1576]: time="2025-08-13T01:38:46.873612739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\" id:\"e4e4535ce9afb3d9f1ccb6471544bf875062de4115df5f68fa4ab9c1d1528328\" pid:2189 exited_at:{seconds:1755049126 nanos:872808298}" Aug 13 01:38:46.890540 kubelet[1934]: E0813 01:38:46.890518 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:46.967542 kubelet[1934]: E0813 01:38:46.967409 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj94h" podUID="4787f758-46fd-4818-87f7-49572bbac91a" Aug 13 01:38:47.891282 kubelet[1934]: E0813 01:38:47.891235 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:48.054523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073206763.mount: Deactivated successfully. 
Aug 13 01:38:48.377097 containerd[1576]: time="2025-08-13T01:38:48.377015230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:48.378121 containerd[1576]: time="2025-08-13T01:38:48.377901570Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 01:38:48.378687 containerd[1576]: time="2025-08-13T01:38:48.378647411Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:48.380181 containerd[1576]: time="2025-08-13T01:38:48.380150321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:48.380722 containerd[1576]: time="2025-08-13T01:38:48.380685062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.614604497s" Aug 13 01:38:48.380765 containerd[1576]: time="2025-08-13T01:38:48.380722142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 01:38:48.382563 containerd[1576]: time="2025-08-13T01:38:48.382524873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:38:48.384073 containerd[1576]: time="2025-08-13T01:38:48.383696183Z" level=info msg="CreateContainer within sandbox \"0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:38:48.393879 containerd[1576]: time="2025-08-13T01:38:48.393839938Z" level=info msg="Container b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:38:48.395300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13715192.mount: Deactivated successfully. Aug 13 01:38:48.402818 containerd[1576]: time="2025-08-13T01:38:48.402784133Z" level=info msg="CreateContainer within sandbox \"0d071968072b11e1fbffea23a20482ef59eb3c64bac16d68b9c79647c2aeef01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b\"" Aug 13 01:38:48.403593 containerd[1576]: time="2025-08-13T01:38:48.403570433Z" level=info msg="StartContainer for \"b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b\"" Aug 13 01:38:48.404928 containerd[1576]: time="2025-08-13T01:38:48.404883774Z" level=info msg="connecting to shim b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b" address="unix:///run/containerd/s/9d360957bb3b1e884ec46f8651b99dac7832bb4d1b5d69a18bb0f4988572b31f" protocol=ttrpc version=3 Aug 13 01:38:48.430161 systemd[1]: Started cri-containerd-b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b.scope - libcontainer container b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b. 
Aug 13 01:38:48.483814 containerd[1576]: time="2025-08-13T01:38:48.483764413Z" level=info msg="StartContainer for \"b35d2aef4318bc1074cf0e59b8667e499c921a0274b447a5ed7071d31af45a3b\" returns successfully"
Aug 13 01:38:48.892276 kubelet[1934]: E0813 01:38:48.892240 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:38:48.968833 kubelet[1934]: E0813 01:38:48.967900 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj94h" podUID="4787f758-46fd-4818-87f7-49572bbac91a"
Aug 13 01:38:48.982295 kubelet[1934]: E0813 01:38:48.982262 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:38:48.993404 kubelet[1934]: I0813 01:38:48.993355 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rm5rw" podStartSLOduration=3.726833636 podStartE2EDuration="5.993344898s" podCreationTimestamp="2025-08-13 01:38:43 +0000 UTC" firstStartedPulling="2025-08-13 01:38:46.11531951 +0000 UTC m=+3.711315266" lastFinishedPulling="2025-08-13 01:38:48.381830772 +0000 UTC m=+5.977826528" observedRunningTime="2025-08-13 01:38:48.992275937 +0000 UTC m=+6.588271703" watchObservedRunningTime="2025-08-13 01:38:48.993344898 +0000 UTC m=+6.589340654"
Aug 13 01:38:49.404651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816193276.mount: Deactivated successfully.
Aug 13 01:38:49.892908 kubelet[1934]: E0813 01:38:49.892853 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:38:49.984399 kubelet[1934]: E0813 01:38:49.984379 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:38:50.461198 containerd[1576]: time="2025-08-13T01:38:50.461107811Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:50.461198 containerd[1576]: time="2025-08-13T01:38:50.462002922Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 01:38:50.465201 containerd[1576]: time="2025-08-13T01:38:50.464060693Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:50.465881 containerd[1576]: time="2025-08-13T01:38:50.465847864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:50.466693 containerd[1576]: time="2025-08-13T01:38:50.466659674Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.084101831s"
Aug 13 01:38:50.466751 containerd[1576]: time="2025-08-13T01:38:50.466739424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 01:38:50.468237 containerd[1576]: time="2025-08-13T01:38:50.468221755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 01:38:50.468696 containerd[1576]: time="2025-08-13T01:38:50.468614875Z" level=info msg="CreateContainer within sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 01:38:50.477810 containerd[1576]: time="2025-08-13T01:38:50.477301819Z" level=info msg="Container be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:38:50.482145 containerd[1576]: time="2025-08-13T01:38:50.482094652Z" level=info msg="CreateContainer within sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\""
Aug 13 01:38:50.485423 containerd[1576]: time="2025-08-13T01:38:50.485400033Z" level=info msg="StartContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\""
Aug 13 01:38:50.486767 containerd[1576]: time="2025-08-13T01:38:50.486729494Z" level=info msg="connecting to shim be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb" address="unix:///run/containerd/s/584b764b616a35ec0c81d25142fbe513f1ff984a3bc2d3dce4ea7cf16d275cbe" protocol=ttrpc version=3
Aug 13 01:38:50.510188 systemd[1]: Started cri-containerd-be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb.scope - libcontainer container be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb.
Aug 13 01:38:50.540840 containerd[1576]: time="2025-08-13T01:38:50.540790681Z" level=info msg="StartContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" returns successfully"
Aug 13 01:38:50.894420 kubelet[1934]: E0813 01:38:50.894373 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:38:50.967521 kubelet[1934]: E0813 01:38:50.967180 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj94h" podUID="4787f758-46fd-4818-87f7-49572bbac91a"
Aug 13 01:38:51.895525 kubelet[1934]: E0813 01:38:51.895486 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:38:52.114905 containerd[1576]: time="2025-08-13T01:38:52.114342397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:52.115830 containerd[1576]: time="2025-08-13T01:38:52.115808878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 01:38:52.120457 containerd[1576]: time="2025-08-13T01:38:52.120435490Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:52.122984 containerd[1576]: time="2025-08-13T01:38:52.122962392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:38:52.123719 containerd[1576]: time="2025-08-13T01:38:52.123468782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.655135447s"
Aug 13 01:38:52.123846 containerd[1576]: time="2025-08-13T01:38:52.123829442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 01:38:52.126782 containerd[1576]: time="2025-08-13T01:38:52.126487593Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 01:38:52.145689 containerd[1576]: time="2025-08-13T01:38:52.145334443Z" level=info msg="Container 089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:38:52.150715 containerd[1576]: time="2025-08-13T01:38:52.150694105Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\""
Aug 13 01:38:52.151149 containerd[1576]: time="2025-08-13T01:38:52.151128546Z" level=info msg="StartContainer for \"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\""
Aug 13 01:38:52.152467 containerd[1576]: time="2025-08-13T01:38:52.152442706Z" level=info msg="connecting to shim 089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e" address="unix:///run/containerd/s/6d500f4cbaa83819e79802bb3b4586eba1525f2c65095f58ced2512ea68225dd" protocol=ttrpc version=3
Aug 13 01:38:52.173178 systemd[1]: Started cri-containerd-089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e.scope - libcontainer container 089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e.
Aug 13 01:38:52.211798 containerd[1576]: time="2025-08-13T01:38:52.211754036Z" level=info msg="StartContainer for \"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\" returns successfully"
Aug 13 01:38:52.641979 containerd[1576]: time="2025-08-13T01:38:52.641778691Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:38:52.644218 systemd[1]: cri-containerd-089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e.scope: Deactivated successfully.
Aug 13 01:38:52.644647 systemd[1]: cri-containerd-089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e.scope: Consumed 476ms CPU time, 196.7M memory peak, 171.2M written to disk.
Aug 13 01:38:52.646709 containerd[1576]: time="2025-08-13T01:38:52.646668503Z" level=info msg="received exit event container_id:\"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\" id:\"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\" pid:2457 exited_at:{seconds:1755049132 nanos:645894983}"
Aug 13 01:38:52.646777 containerd[1576]: time="2025-08-13T01:38:52.646686533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\" id:\"089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e\" pid:2457 exited_at:{seconds:1755049132 nanos:645894983}"
Aug 13 01:38:52.663700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-089a592e00d45673723a0a223508c3cd313ded78c7d2040707eff392c10b314e-rootfs.mount: Deactivated successfully.
Aug 13 01:38:52.737098 kubelet[1934]: I0813 01:38:52.737080 1934 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 01:38:52.753219 kubelet[1934]: I0813 01:38:52.753179 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-kdxxp" podStartSLOduration=5.432311728 podStartE2EDuration="9.753161976s" podCreationTimestamp="2025-08-13 01:38:43 +0000 UTC" firstStartedPulling="2025-08-13 01:38:46.146639526 +0000 UTC m=+3.742635282" lastFinishedPulling="2025-08-13 01:38:50.467489774 +0000 UTC m=+8.063485530" observedRunningTime="2025-08-13 01:38:51.005216873 +0000 UTC m=+8.601212629" watchObservedRunningTime="2025-08-13 01:38:52.753161976 +0000 UTC m=+10.349157742"
Aug 13 01:38:52.759752 systemd[1]: Created slice kubepods-burstable-pod7723ad53_30d5_4e35_b7c8_fc435759001f.slice - libcontainer container kubepods-burstable-pod7723ad53_30d5_4e35_b7c8_fc435759001f.slice.
Aug 13 01:38:52.761983 kubelet[1934]: W0813 01:38:52.761960 1934 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:192.168.169.77" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '192.168.169.77' and this object
Aug 13 01:38:52.762056 kubelet[1934]: E0813 01:38:52.762007 1934 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:192.168.169.77\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '192.168.169.77' and this object" logger="UnhandledError"
Aug 13 01:38:52.762123 kubelet[1934]: W0813 01:38:52.762105 1934 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:192.168.169.77" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node '192.168.169.77' and this object
Aug 13 01:38:52.762152 kubelet[1934]: E0813 01:38:52.762124 1934 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:192.168.169.77\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '192.168.169.77' and this object" logger="UnhandledError"
Aug 13 01:38:52.772101 systemd[1]: Created slice kubepods-burstable-pode58301eb_e332_4d9b_9a2c_dd6075626bdb.slice - libcontainer container kubepods-burstable-pode58301eb_e332_4d9b_9a2c_dd6075626bdb.slice.
Aug 13 01:38:52.780135 systemd[1]: Created slice kubepods-besteffort-poda72001c7_9889_419a_a089_f0c59b51c194.slice - libcontainer container kubepods-besteffort-poda72001c7_9889_419a_a089_f0c59b51c194.slice.
Aug 13 01:38:52.787279 kubelet[1934]: I0813 01:38:52.785667 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a72001c7-9889-419a-a089-f0c59b51c194-calico-apiserver-certs\") pod \"calico-apiserver-84496c965d-d2m42\" (UID: \"a72001c7-9889-419a-a089-f0c59b51c194\") " pod="calico-apiserver/calico-apiserver-84496c965d-d2m42"
Aug 13 01:38:52.787279 kubelet[1934]: I0813 01:38:52.785695 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8dhw\" (UniqueName: \"kubernetes.io/projected/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-kube-api-access-v8dhw\") pod \"calico-apiserver-84496c965d-vd4rt\" (UID: \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\") " pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt"
Aug 13 01:38:52.787279 kubelet[1934]: I0813 01:38:52.785713 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d65p\" (UniqueName: \"kubernetes.io/projected/e58301eb-e332-4d9b-9a2c-dd6075626bdb-kube-api-access-5d65p\") pod \"coredns-668d6bf9bc-lk7hd\" (UID: \"e58301eb-e332-4d9b-9a2c-dd6075626bdb\") " pod="kube-system/coredns-668d6bf9bc-lk7hd"
Aug 13 01:38:52.787279 kubelet[1934]: I0813 01:38:52.785728 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12dec802-7c33-429a-b888-597fd2eba41c-tigera-ca-bundle\") pod \"calico-kube-controllers-868b9987f8-c6whk\" (UID: \"12dec802-7c33-429a-b888-597fd2eba41c\") " pod="calico-system/calico-kube-controllers-868b9987f8-c6whk"
Aug 13 01:38:52.787279 kubelet[1934]: I0813 01:38:52.785747 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-calico-apiserver-certs\") pod \"calico-apiserver-84496c965d-vd4rt\" (UID: \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\") " pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt"
Aug 13 01:38:52.787414 kubelet[1934]: I0813 01:38:52.785764 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7723ad53-30d5-4e35-b7c8-fc435759001f-config-volume\") pod \"coredns-668d6bf9bc-sjhnv\" (UID: \"7723ad53-30d5-4e35-b7c8-fc435759001f\") " pod="kube-system/coredns-668d6bf9bc-sjhnv"
Aug 13 01:38:52.787414 kubelet[1934]: I0813 01:38:52.785782 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4pf\" (UniqueName: \"kubernetes.io/projected/a72001c7-9889-419a-a089-f0c59b51c194-kube-api-access-ln4pf\") pod \"calico-apiserver-84496c965d-d2m42\" (UID: \"a72001c7-9889-419a-a089-f0c59b51c194\") " pod="calico-apiserver/calico-apiserver-84496c965d-d2m42"
Aug 13 01:38:52.787414 kubelet[1934]: I0813 01:38:52.785797 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ps6n\" (UniqueName: \"kubernetes.io/projected/7723ad53-30d5-4e35-b7c8-fc435759001f-kube-api-access-8ps6n\") pod \"coredns-668d6bf9bc-sjhnv\" (UID: \"7723ad53-30d5-4e35-b7c8-fc435759001f\") " pod="kube-system/coredns-668d6bf9bc-sjhnv"
Aug 13 01:38:52.787414 kubelet[1934]: I0813 01:38:52.785816 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e58301eb-e332-4d9b-9a2c-dd6075626bdb-config-volume\") pod \"coredns-668d6bf9bc-lk7hd\" (UID: \"e58301eb-e332-4d9b-9a2c-dd6075626bdb\") " pod="kube-system/coredns-668d6bf9bc-lk7hd"
Aug 13 01:38:52.787414 kubelet[1934]: I0813 01:38:52.785831 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5s6f\" (UniqueName: \"kubernetes.io/projected/12dec802-7c33-429a-b888-597fd2eba41c-kube-api-access-t5s6f\") pod \"calico-kube-controllers-868b9987f8-c6whk\" (UID: \"12dec802-7c33-429a-b888-597fd2eba41c\") " pod="calico-system/calico-kube-controllers-868b9987f8-c6whk"
Aug 13 01:38:52.788056 systemd[1]: Created slice kubepods-besteffort-pod12dec802_7c33_429a_b888_597fd2eba41c.slice - libcontainer container kubepods-besteffort-pod12dec802_7c33_429a_b888_597fd2eba41c.slice.
Aug 13 01:38:52.793465 systemd[1]: Created slice kubepods-besteffort-pod7f3fe3fb_5a16_471a_afe9_f5e076f9d826.slice - libcontainer container kubepods-besteffort-pod7f3fe3fb_5a16_471a_afe9_f5e076f9d826.slice.
Aug 13 01:38:52.798254 systemd[1]: Created slice kubepods-besteffort-pod76b9b46c_a3db_47a1_a6d5_9f38fc763ee1.slice - libcontainer container kubepods-besteffort-pod76b9b46c_a3db_47a1_a6d5_9f38fc763ee1.slice.
Aug 13 01:38:52.802829 systemd[1]: Created slice kubepods-besteffort-pod0d7129e6_655c_4f80_abca_3fdf8acc703c.slice - libcontainer container kubepods-besteffort-pod0d7129e6_655c_4f80_abca_3fdf8acc703c.slice.
Aug 13 01:38:52.886712 kubelet[1934]: I0813 01:38:52.886686 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fzx\" (UniqueName: \"kubernetes.io/projected/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-kube-api-access-w8fzx\") pod \"whisker-6f5c498445-4llhh\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") " pod="calico-system/whisker-6f5c498445-4llhh"
Aug 13 01:38:52.887946 kubelet[1934]: I0813 01:38:52.886714 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-config\") pod \"goldmane-768f4c5c69-vzph9\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " pod="calico-system/goldmane-768f4c5c69-vzph9"
Aug 13 01:38:52.887946 kubelet[1934]: I0813 01:38:52.886730 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmtw\" (UniqueName: \"kubernetes.io/projected/0d7129e6-655c-4f80-abca-3fdf8acc703c-kube-api-access-hcmtw\") pod \"goldmane-768f4c5c69-vzph9\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " pod="calico-system/goldmane-768f4c5c69-vzph9"
Aug 13 01:38:52.887946 kubelet[1934]: I0813 01:38:52.886745 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-ca-bundle\") pod \"whisker-6f5c498445-4llhh\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") " pod="calico-system/whisker-6f5c498445-4llhh"
Aug 13 01:38:52.887946 kubelet[1934]: I0813 01:38:52.886776 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-vzph9\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " pod="calico-system/goldmane-768f4c5c69-vzph9"
Aug 13 01:38:52.887946 kubelet[1934]: I0813 01:38:52.886789 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-backend-key-pair\") pod \"whisker-6f5c498445-4llhh\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") " pod="calico-system/whisker-6f5c498445-4llhh"
Aug 13 01:38:52.888077 kubelet[1934]: I0813 01:38:52.886842 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-vzph9\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " pod="calico-system/goldmane-768f4c5c69-vzph9"
Aug 13 01:38:52.896979 kubelet[1934]: E0813 01:38:52.896851 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:38:52.972850 systemd[1]: Created slice kubepods-besteffort-pod4787f758_46fd_4818_87f7_49572bbac91a.slice - libcontainer container kubepods-besteffort-pod4787f758_46fd_4818_87f7_49572bbac91a.slice.
Aug 13 01:38:52.978196 containerd[1576]: time="2025-08-13T01:38:52.978152319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj94h,Uid:4787f758-46fd-4818-87f7-49572bbac91a,Namespace:calico-system,Attempt:0,}"
Aug 13 01:38:53.007595 containerd[1576]: time="2025-08-13T01:38:53.007565254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:38:53.043205 containerd[1576]: time="2025-08-13T01:38:53.043136211Z" level=error msg="Failed to destroy network for sandbox \"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.046009 containerd[1576]: time="2025-08-13T01:38:53.045903333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj94h,Uid:4787f758-46fd-4818-87f7-49572bbac91a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.046233 kubelet[1934]: E0813 01:38:53.046184 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.046233 kubelet[1934]: E0813 01:38:53.046237 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mj94h"
Aug 13 01:38:53.046368 kubelet[1934]: E0813 01:38:53.046257 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mj94h"
Aug 13 01:38:53.046368 kubelet[1934]: E0813 01:38:53.046293 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mj94h_calico-system(4787f758-46fd-4818-87f7-49572bbac91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mj94h_calico-system(4787f758-46fd-4818-87f7-49572bbac91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5483fd90d5fe9cef18ecbac0f4698bdd4ed820ae5247fff62fbc1721d8fd0822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mj94h" podUID="4787f758-46fd-4818-87f7-49572bbac91a"
Aug 13 01:38:53.069080 kubelet[1934]: E0813 01:38:53.069032 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:38:53.069599 containerd[1576]: time="2025-08-13T01:38:53.069540095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhnv,Uid:7723ad53-30d5-4e35-b7c8-fc435759001f,Namespace:kube-system,Attempt:0,}"
Aug 13 01:38:53.078065 kubelet[1934]: E0813 01:38:53.078011 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:38:53.078605 containerd[1576]: time="2025-08-13T01:38:53.078568869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lk7hd,Uid:e58301eb-e332-4d9b-9a2c-dd6075626bdb,Namespace:kube-system,Attempt:0,}"
Aug 13 01:38:53.091528 containerd[1576]: time="2025-08-13T01:38:53.091338785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b9987f8-c6whk,Uid:12dec802-7c33-429a-b888-597fd2eba41c,Namespace:calico-system,Attempt:0,}"
Aug 13 01:38:53.101735 containerd[1576]: time="2025-08-13T01:38:53.101696941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5c498445-4llhh,Uid:76b9b46c-a3db-47a1-a6d5-9f38fc763ee1,Namespace:calico-system,Attempt:0,}"
Aug 13 01:38:53.106911 containerd[1576]: time="2025-08-13T01:38:53.106880083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vzph9,Uid:0d7129e6-655c-4f80-abca-3fdf8acc703c,Namespace:calico-system,Attempt:0,}"
Aug 13 01:38:53.150564 containerd[1576]: time="2025-08-13T01:38:53.149699045Z" level=error msg="Failed to destroy network for sandbox \"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.153282 containerd[1576]: time="2025-08-13T01:38:53.153159556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhnv,Uid:7723ad53-30d5-4e35-b7c8-fc435759001f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.154073 kubelet[1934]: E0813 01:38:53.153574 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.154073 kubelet[1934]: E0813 01:38:53.153651 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sjhnv"
Aug 13 01:38:53.154073 kubelet[1934]: E0813 01:38:53.153671 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sjhnv"
Aug 13 01:38:53.154237 kubelet[1934]: E0813 01:38:53.153727 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sjhnv_kube-system(7723ad53-30d5-4e35-b7c8-fc435759001f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sjhnv_kube-system(7723ad53-30d5-4e35-b7c8-fc435759001f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"012f90d1deee50aef2aabcdcb0431bea99be597925d99d476011312c3a091232\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sjhnv" podUID="7723ad53-30d5-4e35-b7c8-fc435759001f"
Aug 13 01:38:53.164812 systemd[1]: run-netns-cni\x2d3921b3c3\x2d83dd\x2dcebf\x2d4d3a\x2d88d93b8f6c8e.mount: Deactivated successfully.
Aug 13 01:38:53.177659 containerd[1576]: time="2025-08-13T01:38:53.177610909Z" level=error msg="Failed to destroy network for sandbox \"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.179882 systemd[1]: run-netns-cni\x2d2e5c720b\x2dbaa6\x2da3ed\x2d1659\x2d83e56671d451.mount: Deactivated successfully.
Aug 13 01:38:53.180626 containerd[1576]: time="2025-08-13T01:38:53.180565150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lk7hd,Uid:e58301eb-e332-4d9b-9a2c-dd6075626bdb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.180864 kubelet[1934]: E0813 01:38:53.180844 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.181007 kubelet[1934]: E0813 01:38:53.180960 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lk7hd"
Aug 13 01:38:53.181007 kubelet[1934]: E0813 01:38:53.180985 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lk7hd"
Aug 13 01:38:53.181488 kubelet[1934]: E0813 01:38:53.181404 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lk7hd_kube-system(e58301eb-e332-4d9b-9a2c-dd6075626bdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lk7hd_kube-system(e58301eb-e332-4d9b-9a2c-dd6075626bdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6c1ec01d5621d8c9a99e2a446ceaf016e04a2287d6ec1e0a76ceb2d0efc861f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lk7hd" podUID="e58301eb-e332-4d9b-9a2c-dd6075626bdb"
Aug 13 01:38:53.217086 containerd[1576]: time="2025-08-13T01:38:53.216989878Z" level=error msg="Failed to destroy network for sandbox \"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:38:53.219664 systemd[1]: run-netns-cni\x2d27c6eeb6\x2d7f98\x2d2a72\x2db457\x2d03f58e6af0d9.mount: Deactivated successfully.
Aug 13 01:38:53.220890 containerd[1576]: time="2025-08-13T01:38:53.220841330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b9987f8-c6whk,Uid:12dec802-7c33-429a-b888-597fd2eba41c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.221458 kubelet[1934]: E0813 01:38:53.221142 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.221458 kubelet[1934]: E0813 01:38:53.221206 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" Aug 13 01:38:53.221458 kubelet[1934]: E0813 01:38:53.221229 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" Aug 13 01:38:53.221563 kubelet[1934]: E0813 01:38:53.221265 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-868b9987f8-c6whk_calico-system(12dec802-7c33-429a-b888-597fd2eba41c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-868b9987f8-c6whk_calico-system(12dec802-7c33-429a-b888-597fd2eba41c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"392b77cb11a7bb4ecd243f6a57c7146dae18adc7d513a34b771c8877dd06233e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" podUID="12dec802-7c33-429a-b888-597fd2eba41c" Aug 13 01:38:53.241184 containerd[1576]: time="2025-08-13T01:38:53.241138200Z" level=error msg="Failed to destroy network for sandbox \"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.242998 systemd[1]: run-netns-cni\x2db474abfe\x2da553\x2df8c4\x2ded73\x2d3ba5d536c4ac.mount: Deactivated successfully. 
Aug 13 01:38:53.244763 containerd[1576]: time="2025-08-13T01:38:53.244725072Z" level=error msg="Failed to destroy network for sandbox \"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.244905 containerd[1576]: time="2025-08-13T01:38:53.244850102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vzph9,Uid:0d7129e6-655c-4f80-abca-3fdf8acc703c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.245157 kubelet[1934]: E0813 01:38:53.245029 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.245274 kubelet[1934]: E0813 01:38:53.245172 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vzph9" Aug 13 01:38:53.245274 kubelet[1934]: E0813 01:38:53.245191 1934 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vzph9" Aug 13 01:38:53.245274 kubelet[1934]: E0813 01:38:53.245254 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-vzph9_calico-system(0d7129e6-655c-4f80-abca-3fdf8acc703c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-vzph9_calico-system(0d7129e6-655c-4f80-abca-3fdf8acc703c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a0cb42ce1d86d3df5c01a0d3c9173bf15bedb3706c8db4fe2793741136a4dda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vzph9" podUID="0d7129e6-655c-4f80-abca-3fdf8acc703c" Aug 13 01:38:53.245916 containerd[1576]: time="2025-08-13T01:38:53.245850123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5c498445-4llhh,Uid:76b9b46c-a3db-47a1-a6d5-9f38fc763ee1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.246018 kubelet[1934]: E0813 01:38:53.245992 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:53.246413 kubelet[1934]: E0813 01:38:53.246127 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5c498445-4llhh" Aug 13 01:38:53.246413 kubelet[1934]: E0813 01:38:53.246159 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5c498445-4llhh" Aug 13 01:38:53.246413 kubelet[1934]: E0813 01:38:53.246201 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f5c498445-4llhh_calico-system(76b9b46c-a3db-47a1-a6d5-9f38fc763ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f5c498445-4llhh_calico-system(76b9b46c-a3db-47a1-a6d5-9f38fc763ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a124a23b80484706f3a4b88c974f349dd9654270e5073ff0fb52a67404d6fd12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f5c498445-4llhh" 
podUID="76b9b46c-a3db-47a1-a6d5-9f38fc763ee1" Aug 13 01:38:53.898492 kubelet[1934]: E0813 01:38:53.898450 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:53.986274 containerd[1576]: time="2025-08-13T01:38:53.986237663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84496c965d-d2m42,Uid:a72001c7-9889-419a-a089-f0c59b51c194,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:38:53.997420 containerd[1576]: time="2025-08-13T01:38:53.997174538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84496c965d-vd4rt,Uid:7f3fe3fb-5a16-471a-afe9-f5e076f9d826,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:38:54.066742 containerd[1576]: time="2025-08-13T01:38:54.066697773Z" level=error msg="Failed to destroy network for sandbox \"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.068022 containerd[1576]: time="2025-08-13T01:38:54.067989963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84496c965d-d2m42,Uid:a72001c7-9889-419a-a089-f0c59b51c194,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.068407 kubelet[1934]: E0813 01:38:54.068351 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.068458 kubelet[1934]: E0813 01:38:54.068410 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84496c965d-d2m42" Aug 13 01:38:54.068458 kubelet[1934]: E0813 01:38:54.068431 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84496c965d-d2m42" Aug 13 01:38:54.068526 kubelet[1934]: E0813 01:38:54.068494 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84496c965d-d2m42_calico-apiserver(a72001c7-9889-419a-a089-f0c59b51c194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84496c965d-d2m42_calico-apiserver(a72001c7-9889-419a-a089-f0c59b51c194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a080157bac0ec39694b4ad680336aaf1785befac25d7234eb237638b23ff7194\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84496c965d-d2m42" 
podUID="a72001c7-9889-419a-a089-f0c59b51c194" Aug 13 01:38:54.078488 containerd[1576]: time="2025-08-13T01:38:54.078448239Z" level=error msg="Failed to destroy network for sandbox \"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.080736 containerd[1576]: time="2025-08-13T01:38:54.080701140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84496c965d-vd4rt,Uid:7f3fe3fb-5a16-471a-afe9-f5e076f9d826,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.081091 kubelet[1934]: E0813 01:38:54.080859 1934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:38:54.081091 kubelet[1934]: E0813 01:38:54.080909 1934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt" Aug 13 01:38:54.081091 
kubelet[1934]: E0813 01:38:54.080930 1934 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt" Aug 13 01:38:54.081179 kubelet[1934]: E0813 01:38:54.080965 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84496c965d-vd4rt_calico-apiserver(7f3fe3fb-5a16-471a-afe9-f5e076f9d826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84496c965d-vd4rt_calico-apiserver(7f3fe3fb-5a16-471a-afe9-f5e076f9d826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac018574520f2e2700ea56f4969831890879f26200b985ac5946338d829f8420\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt" podUID="7f3fe3fb-5a16-471a-afe9-f5e076f9d826" Aug 13 01:38:54.148431 systemd[1]: run-netns-cni\x2deba2ee00\x2d91fb\x2dd879\x2defd1\x2d2ade15621f50.mount: Deactivated successfully. Aug 13 01:38:54.898922 kubelet[1934]: E0813 01:38:54.898882 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:55.900633 kubelet[1934]: E0813 01:38:55.900419 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:56.304614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506602887.mount: Deactivated successfully. 
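The entries above repeat the same failure for many pods: every CreatePodSandbox attempt dies on `stat /var/lib/calico/nodename: no such file or directory` because calico/node has not yet written its nodename file. When triaging a log like this, it helps to tally which pods are stuck rather than reading each multi-kilobyte line. The following is a hypothetical helper (not part of this log or of kubelet itself), assuming journalctl-style single-line entries where kubelet's `pod_workers.go` "Error syncing pod" messages carry `pod="ns/name"` and `podUID="..."` key=value fields as seen above:

```python
import re

# Matches kubelet "Error syncing pod, skipping" lines emitted by pod_workers.go,
# capturing the pod="ns/name" and podUID="..." fields that trail the err= payload.
# The non-greedy .*? skips over the (heavily escaped) err= string on each line.
POD_RE = re.compile(
    r'pod_workers\.go:\d+\].*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def failing_pods(log_text: str) -> dict:
    """Map pod name (ns/name) -> podUID for every sandbox-sync failure seen."""
    return {m.group("pod"): m.group("uid") for m in POD_RE.finditer(log_text)}
```

Fed the section above, this would report coredns, calico-kube-controllers, goldmane, whisker, and both calico-apiserver replicas as blocked on the same CNI error — all of which clear once calico-node finishes starting (the image pull and StartContainer entries that follow).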
Aug 13 01:38:56.329998 containerd[1576]: time="2025-08-13T01:38:56.329944224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:56.330958 containerd[1576]: time="2025-08-13T01:38:56.330776314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:38:56.331529 containerd[1576]: time="2025-08-13T01:38:56.331496884Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:56.332994 containerd[1576]: time="2025-08-13T01:38:56.332960685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:38:56.333422 containerd[1576]: time="2025-08-13T01:38:56.333386075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.325665321s" Aug 13 01:38:56.333469 containerd[1576]: time="2025-08-13T01:38:56.333423395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:38:56.347068 containerd[1576]: time="2025-08-13T01:38:56.347019962Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:38:56.358197 containerd[1576]: time="2025-08-13T01:38:56.358164468Z" level=info msg="Container 
5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:38:56.364026 containerd[1576]: time="2025-08-13T01:38:56.363978921Z" level=info msg="CreateContainer within sandbox \"90b609c4273fafce0b2a0087cc5460a2067b3c795c7a79e727c86faf53621135\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59\"" Aug 13 01:38:56.364434 containerd[1576]: time="2025-08-13T01:38:56.364415601Z" level=info msg="StartContainer for \"5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59\"" Aug 13 01:38:56.365845 containerd[1576]: time="2025-08-13T01:38:56.365670211Z" level=info msg="connecting to shim 5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59" address="unix:///run/containerd/s/6d500f4cbaa83819e79802bb3b4586eba1525f2c65095f58ced2512ea68225dd" protocol=ttrpc version=3 Aug 13 01:38:56.384153 systemd[1]: Started cri-containerd-5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59.scope - libcontainer container 5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59. Aug 13 01:38:56.427744 containerd[1576]: time="2025-08-13T01:38:56.427709962Z" level=info msg="StartContainer for \"5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59\" returns successfully" Aug 13 01:38:56.499383 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:38:56.499492 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 01:38:56.647019 kubelet[1934]: I0813 01:38:56.646897 1934 status_manager.go:890] "Failed to get status for pod" podUID="a6fc4261-d1e0-4df1-b913-90692b3a76b6" pod="default/nginx-deployment-7fcdb87857-gncrg" err="pods \"nginx-deployment-7fcdb87857-gncrg\" is forbidden: User \"system:node:192.168.169.77\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node '192.168.169.77' and this object" Aug 13 01:38:56.647019 kubelet[1934]: W0813 01:38:56.646973 1934 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:192.168.169.77" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '192.168.169.77' and this object Aug 13 01:38:56.647019 kubelet[1934]: E0813 01:38:56.646994 1934 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:192.168.169.77\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '192.168.169.77' and this object" logger="UnhandledError" Aug 13 01:38:56.650150 systemd[1]: Created slice kubepods-besteffort-poda6fc4261_d1e0_4df1_b913_90692b3a76b6.slice - libcontainer container kubepods-besteffort-poda6fc4261_d1e0_4df1_b913_90692b3a76b6.slice. 
Aug 13 01:38:56.708032 kubelet[1934]: I0813 01:38:56.707998 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f5xr\" (UniqueName: \"kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr\") pod \"nginx-deployment-7fcdb87857-gncrg\" (UID: \"a6fc4261-d1e0-4df1-b913-90692b3a76b6\") " pod="default/nginx-deployment-7fcdb87857-gncrg" Aug 13 01:38:56.900681 kubelet[1934]: E0813 01:38:56.900521 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:57.042370 kubelet[1934]: I0813 01:38:57.042315 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hxxs9" podStartSLOduration=3.822163533 podStartE2EDuration="14.04229689s" podCreationTimestamp="2025-08-13 01:38:43 +0000 UTC" firstStartedPulling="2025-08-13 01:38:46.114372719 +0000 UTC m=+3.710368475" lastFinishedPulling="2025-08-13 01:38:56.334506076 +0000 UTC m=+13.930501832" observedRunningTime="2025-08-13 01:38:57.029186773 +0000 UTC m=+14.625182529" watchObservedRunningTime="2025-08-13 01:38:57.04229689 +0000 UTC m=+14.638292656" Aug 13 01:38:57.815162 kubelet[1934]: E0813 01:38:57.815100 1934 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:38:57.815162 kubelet[1934]: E0813 01:38:57.815140 1934 projected.go:194] Error preparing data for projected volume kube-api-access-5f5xr for pod default/nginx-deployment-7fcdb87857-gncrg: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:38:57.815366 kubelet[1934]: E0813 01:38:57.815218 1934 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr podName:a6fc4261-d1e0-4df1-b913-90692b3a76b6 nodeName:}" failed. 
No retries permitted until 2025-08-13 01:38:58.315191346 +0000 UTC m=+15.911187102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5f5xr" (UniqueName: "kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr") pod "nginx-deployment-7fcdb87857-gncrg" (UID: "a6fc4261-d1e0-4df1-b913-90692b3a76b6") : failed to sync configmap cache: timed out waiting for the condition Aug 13 01:38:57.900929 kubelet[1934]: E0813 01:38:57.900866 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:58.308328 systemd-networkd[1455]: vxlan.calico: Link UP Aug 13 01:38:58.308344 systemd-networkd[1455]: vxlan.calico: Gained carrier Aug 13 01:38:58.453736 containerd[1576]: time="2025-08-13T01:38:58.453667205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gncrg,Uid:a6fc4261-d1e0-4df1-b913-90692b3a76b6,Namespace:default,Attempt:0,}" Aug 13 01:38:58.570493 systemd-networkd[1455]: cali43d09e7fe56: Link UP Aug 13 01:38:58.571207 systemd-networkd[1455]: cali43d09e7fe56: Gained carrier Aug 13 01:38:58.583860 containerd[1576]: 2025-08-13 01:38:58.492 [INFO][2945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0 nginx-deployment-7fcdb87857- default a6fc4261-d1e0-4df1-b913-90692b3a76b6 9198 0 2025-08-13 01:38:56 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 192.168.169.77 nginx-deployment-7fcdb87857-gncrg eth0 default [] [] [kns.default ksa.default.default] cali43d09e7fe56 [] [] }} ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" 
WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-" Aug 13 01:38:58.583860 containerd[1576]: 2025-08-13 01:38:58.493 [INFO][2945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.583860 containerd[1576]: 2025-08-13 01:38:58.515 [INFO][2957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.515 [INFO][2957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"default", "node":"192.168.169.77", "pod":"nginx-deployment-7fcdb87857-gncrg", "timestamp":"2025-08-13 01:38:58.515822366 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.516 [INFO][2957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.516 [INFO][2957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.516 [INFO][2957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.523 [INFO][2957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" host="192.168.169.77" Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.528 [INFO][2957] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.533 [INFO][2957] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.535 [INFO][2957] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:38:58.584011 containerd[1576]: 2025-08-13 01:38:58.538 [INFO][2957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.538 [INFO][2957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" host="192.168.169.77" Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.539 [INFO][2957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.543 [INFO][2957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" host="192.168.169.77" Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.548 [INFO][2957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.193/26] block=192.168.60.192/26 
handle="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" host="192.168.169.77" Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.548 [INFO][2957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.193/26] handle="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" host="192.168.169.77" Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.548 [INFO][2957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:38:58.584240 containerd[1576]: 2025-08-13 01:38:58.548 [INFO][2957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.193/26] IPv6=[] ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.584371 containerd[1576]: 2025-08-13 01:38:58.553 [INFO][2945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a6fc4261-d1e0-4df1-b913-90692b3a76b6", ResourceVersion:"9198", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 38, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-gncrg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali43d09e7fe56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:38:58.584371 containerd[1576]: 2025-08-13 01:38:58.553 [INFO][2945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.193/32] ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.584443 containerd[1576]: 2025-08-13 01:38:58.553 [INFO][2945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43d09e7fe56 ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.584443 containerd[1576]: 2025-08-13 01:38:58.570 [INFO][2945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.584483 containerd[1576]: 2025-08-13 01:38:58.571 [INFO][2945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" 
Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a6fc4261-d1e0-4df1-b913-90692b3a76b6", ResourceVersion:"9198", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 38, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f", Pod:"nginx-deployment-7fcdb87857-gncrg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali43d09e7fe56", MAC:"0a:b4:06:90:f0:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:38:58.584534 containerd[1576]: 2025-08-13 01:38:58.579 [INFO][2945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Namespace="default" Pod="nginx-deployment-7fcdb87857-gncrg" WorkloadEndpoint="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:38:58.621653 containerd[1576]: 
time="2025-08-13T01:38:58.621577489Z" level=info msg="connecting to shim 2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" address="unix:///run/containerd/s/2340b50bfc396af947cdaf61578116cbc4a90e6de523c56cc5b539e8021aca33" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:38:58.652244 systemd[1]: Started cri-containerd-2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f.scope - libcontainer container 2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f. Aug 13 01:38:58.701871 containerd[1576]: time="2025-08-13T01:38:58.701833109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gncrg,Uid:a6fc4261-d1e0-4df1-b913-90692b3a76b6,Namespace:default,Attempt:0,} returns sandbox id \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\"" Aug 13 01:38:58.703295 containerd[1576]: time="2025-08-13T01:38:58.703268769Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 01:38:58.901337 kubelet[1934]: E0813 01:38:58.901262 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:38:59.629230 systemd-networkd[1455]: cali43d09e7fe56: Gained IPv6LL Aug 13 01:38:59.902274 kubelet[1934]: E0813 01:38:59.901539 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:00.141469 systemd-networkd[1455]: vxlan.calico: Gained IPv6LL Aug 13 01:39:00.291461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949749638.mount: Deactivated successfully. 
Aug 13 01:39:00.458730 kubelet[1934]: I0813 01:39:00.458710 1934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:39:00.558708 containerd[1576]: time="2025-08-13T01:39:00.558585337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59\" id:\"7e45d958318ffb3d94e7ffd2a94f51ea4df48fb27ef50f86d7eafddc1a346944\" pid:3077 exit_status:1 exited_at:{seconds:1755049140 nanos:557525276}" Aug 13 01:39:00.646028 containerd[1576]: time="2025-08-13T01:39:00.645991670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e2c3ccb543d31ecbb3fcf22f03f713b4b176a4ca9e1a86e80194e9853114e59\" id:\"48e3722329f86948ac9c3d8a8ac42a4b12511e82fa3f91872a78769a99af2cbb\" pid:3101 exit_status:1 exited_at:{seconds:1755049140 nanos:645526560}" Aug 13 01:39:00.902351 kubelet[1934]: E0813 01:39:00.902238 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:01.305134 containerd[1576]: time="2025-08-13T01:39:01.304373979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:01.305134 containerd[1576]: time="2025-08-13T01:39:01.305076289Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204" Aug 13 01:39:01.305952 containerd[1576]: time="2025-08-13T01:39:01.305913900Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:01.307822 containerd[1576]: time="2025-08-13T01:39:01.307798231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:01.309148 containerd[1576]: 
time="2025-08-13T01:39:01.309124132Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 2.605790702s" Aug 13 01:39:01.309197 containerd[1576]: time="2025-08-13T01:39:01.309168872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 01:39:01.311124 containerd[1576]: time="2025-08-13T01:39:01.311095073Z" level=info msg="CreateContainer within sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 13 01:39:01.318930 containerd[1576]: time="2025-08-13T01:39:01.318900726Z" level=info msg="Container 39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:01.324900 containerd[1576]: time="2025-08-13T01:39:01.324865709Z" level=info msg="CreateContainer within sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\"" Aug 13 01:39:01.325398 containerd[1576]: time="2025-08-13T01:39:01.325373320Z" level=info msg="StartContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\"" Aug 13 01:39:01.327073 containerd[1576]: time="2025-08-13T01:39:01.327027360Z" level=info msg="connecting to shim 39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0" address="unix:///run/containerd/s/2340b50bfc396af947cdaf61578116cbc4a90e6de523c56cc5b539e8021aca33" protocol=ttrpc version=3 Aug 13 01:39:01.354309 systemd[1]: Started 
cri-containerd-39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0.scope - libcontainer container 39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0. Aug 13 01:39:01.387005 containerd[1576]: time="2025-08-13T01:39:01.386974210Z" level=info msg="StartContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" returns successfully" Aug 13 01:39:01.902359 kubelet[1934]: E0813 01:39:01.902327 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:02.035903 kubelet[1934]: I0813 01:39:02.035832 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-gncrg" podStartSLOduration=3.428656842 podStartE2EDuration="6.035817455s" podCreationTimestamp="2025-08-13 01:38:56 +0000 UTC" firstStartedPulling="2025-08-13 01:38:58.702645399 +0000 UTC m=+16.298641155" lastFinishedPulling="2025-08-13 01:39:01.309806012 +0000 UTC m=+18.905801768" observedRunningTime="2025-08-13 01:39:02.035639915 +0000 UTC m=+19.631635671" watchObservedRunningTime="2025-08-13 01:39:02.035817455 +0000 UTC m=+19.631813211" Aug 13 01:39:02.436720 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 01:39:02.887994 kubelet[1934]: E0813 01:39:02.887933 1934 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:02.903432 kubelet[1934]: E0813 01:39:02.903407 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:03.011394 kubelet[1934]: I0813 01:39:03.011363 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:03.011394 kubelet[1934]: I0813 01:39:03.011400 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:03.012734 kubelet[1934]: I0813 01:39:03.012702 1934 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:03.020864 kubelet[1934]: I0813 01:39:03.020842 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:03.020942 kubelet[1934]: I0813 01:39:03.020899 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-84496c965d-d2m42","calico-apiserver/calico-apiserver-84496c965d-vd4rt","calico-system/whisker-6f5c498445-4llhh","calico-system/goldmane-768f4c5c69-vzph9","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-kube-controllers-868b9987f8-c6whk","calico-system/csi-node-driver-mj94h","default/nginx-deployment-7fcdb87857-gncrg","tigera-operator/tigera-operator-747864d56d-kdxxp","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw"] Aug 13 01:39:03.025573 kubelet[1934]: I0813 01:39:03.025498 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-84496c965d-d2m42" Aug 13 01:39:03.025573 kubelet[1934]: I0813 01:39:03.025521 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" 
pods=["calico-apiserver/calico-apiserver-84496c965d-d2m42"] Aug 13 01:39:03.049657 kubelet[1934]: I0813 01:39:03.049350 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a72001c7-9889-419a-a089-f0c59b51c194-calico-apiserver-certs\") pod \"a72001c7-9889-419a-a089-f0c59b51c194\" (UID: \"a72001c7-9889-419a-a089-f0c59b51c194\") " Aug 13 01:39:03.049657 kubelet[1934]: I0813 01:39:03.049385 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln4pf\" (UniqueName: \"kubernetes.io/projected/a72001c7-9889-419a-a089-f0c59b51c194-kube-api-access-ln4pf\") pod \"a72001c7-9889-419a-a089-f0c59b51c194\" (UID: \"a72001c7-9889-419a-a089-f0c59b51c194\") " Aug 13 01:39:03.053965 systemd[1]: var-lib-kubelet-pods-a72001c7\x2d9889\x2d419a\x2da089\x2df0c59b51c194-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dln4pf.mount: Deactivated successfully. Aug 13 01:39:03.054247 kubelet[1934]: I0813 01:39:03.054216 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72001c7-9889-419a-a089-f0c59b51c194-kube-api-access-ln4pf" (OuterVolumeSpecName: "kube-api-access-ln4pf") pod "a72001c7-9889-419a-a089-f0c59b51c194" (UID: "a72001c7-9889-419a-a089-f0c59b51c194"). InnerVolumeSpecName "kube-api-access-ln4pf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:39:03.054899 kubelet[1934]: I0813 01:39:03.054863 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72001c7-9889-419a-a089-f0c59b51c194-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a72001c7-9889-419a-a089-f0c59b51c194" (UID: "a72001c7-9889-419a-a089-f0c59b51c194"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:39:03.057741 systemd[1]: var-lib-kubelet-pods-a72001c7\x2d9889\x2d419a\x2da089\x2df0c59b51c194-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:39:03.070327 kubelet[1934]: I0813 01:39:03.070301 1934 kubelet.go:2351] "Pod admission denied" podUID="ff1e451b-0fd2-433b-b006-b76038ce42e3" pod="calico-apiserver/calico-apiserver-84496c965d-f9jht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.112101 kubelet[1934]: I0813 01:39:03.112063 1934 kubelet.go:2351] "Pod admission denied" podUID="fe8829fb-6f76-4708-8f83-c294da39aa9a" pod="calico-apiserver/calico-apiserver-84496c965d-4h7tn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.147876 kubelet[1934]: I0813 01:39:03.147638 1934 kubelet.go:2351] "Pod admission denied" podUID="2e810b4e-ed9c-4aa4-97e1-f36e70e01562" pod="calico-apiserver/calico-apiserver-84496c965d-rppv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.150354 kubelet[1934]: I0813 01:39:03.150303 1934 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a72001c7-9889-419a-a089-f0c59b51c194-calico-apiserver-certs\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:03.150354 kubelet[1934]: I0813 01:39:03.150325 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ln4pf\" (UniqueName: \"kubernetes.io/projected/a72001c7-9889-419a-a089-f0c59b51c194-kube-api-access-ln4pf\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:03.176757 kubelet[1934]: I0813 01:39:03.176733 1934 kubelet.go:2351] "Pod admission denied" podUID="dd750369-a209-4322-9d7b-c760ad5f6d9a" pod="calico-apiserver/calico-apiserver-84496c965d-kl59c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:39:03.213007 kubelet[1934]: I0813 01:39:03.212987 1934 kubelet.go:2351] "Pod admission denied" podUID="c859b687-292a-441c-9c69-4a25465a9fc4" pod="calico-apiserver/calico-apiserver-84496c965d-2qkbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.238154 kubelet[1934]: I0813 01:39:03.238137 1934 kubelet.go:2351] "Pod admission denied" podUID="494a9c01-1dac-4dee-8fc4-08aec0d938a3" pod="calico-apiserver/calico-apiserver-84496c965d-9w5km" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.283475 kubelet[1934]: I0813 01:39:03.283449 1934 kubelet.go:2351] "Pod admission denied" podUID="a794d870-7cf4-4a9d-ae33-621e45451e46" pod="calico-apiserver/calico-apiserver-84496c965d-ngv9w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.314055 kubelet[1934]: I0813 01:39:03.314015 1934 kubelet.go:2351] "Pod admission denied" podUID="b219f9a5-b527-4eee-8141-58bbc48074cc" pod="calico-apiserver/calico-apiserver-84496c965d-m9thm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.332770 systemd[1]: Removed slice kubepods-besteffort-poda72001c7_9889_419a_a089_f0c59b51c194.slice - libcontainer container kubepods-besteffort-poda72001c7_9889_419a_a089_f0c59b51c194.slice. Aug 13 01:39:03.349924 kubelet[1934]: I0813 01:39:03.349895 1934 kubelet.go:2351] "Pod admission denied" podUID="90b79aeb-ebea-49b3-9051-ba24ef2aab86" pod="calico-apiserver/calico-apiserver-84496c965d-ww2qm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.400929 kubelet[1934]: I0813 01:39:03.400834 1934 kubelet.go:2351] "Pod admission denied" podUID="83cce4e5-2890-4c95-87ee-2a52bd014ffa" pod="calico-apiserver/calico-apiserver-84496c965d-vlnjw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:39:03.551945 kubelet[1934]: I0813 01:39:03.551903 1934 kubelet.go:2351] "Pod admission denied" podUID="4c71e170-d5ee-44bb-b1a2-57173aea183e" pod="calico-apiserver/calico-apiserver-84496c965d-tsx9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:03.904000 kubelet[1934]: E0813 01:39:03.903931 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:03.968338 containerd[1576]: time="2025-08-13T01:39:03.968090270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj94h,Uid:4787f758-46fd-4818-87f7-49572bbac91a,Namespace:calico-system,Attempt:0,}" Aug 13 01:39:03.981077 kubelet[1934]: E0813 01:39:03.981013 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:03.981748 containerd[1576]: time="2025-08-13T01:39:03.981570347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lk7hd,Uid:e58301eb-e332-4d9b-9a2c-dd6075626bdb,Namespace:kube-system,Attempt:0,}" Aug 13 01:39:04.026018 kubelet[1934]: I0813 01:39:04.025943 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-84496c965d-d2m42"] Aug 13 01:39:04.052280 kubelet[1934]: I0813 01:39:04.052245 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:04.052561 kubelet[1934]: I0813 01:39:04.052285 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:04.054012 kubelet[1934]: I0813 01:39:04.053986 1934 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:04.068275 kubelet[1934]: I0813 01:39:04.067565 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:04.068275 
kubelet[1934]: I0813 01:39:04.067608 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-vzph9","calico-apiserver/calico-apiserver-84496c965d-vd4rt","calico-system/whisker-6f5c498445-4llhh","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-kube-controllers-868b9987f8-c6whk","calico-system/csi-node-driver-mj94h","default/nginx-deployment-7fcdb87857-gncrg","tigera-operator/tigera-operator-747864d56d-kdxxp","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw"] Aug 13 01:39:04.078406 kubelet[1934]: I0813 01:39:04.078390 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-vzph9" Aug 13 01:39:04.078532 kubelet[1934]: I0813 01:39:04.078521 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-vzph9"] Aug 13 01:39:04.098243 systemd-networkd[1455]: cali00948e67b42: Link UP Aug 13 01:39:04.099457 systemd-networkd[1455]: cali00948e67b42: Gained carrier Aug 13 01:39:04.118032 containerd[1576]: 2025-08-13 01:39:04.012 [INFO][3201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-csi--node--driver--mj94h-eth0 csi-node-driver- calico-system 4787f758-46fd-4818-87f7-49572bbac91a 9077 0 2025-08-13 01:38:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 192.168.169.77 csi-node-driver-mj94h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali00948e67b42 [] [] }} ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" 
Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-" Aug 13 01:39:04.118032 containerd[1576]: 2025-08-13 01:39:04.012 [INFO][3201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.118032 containerd[1576]: 2025-08-13 01:39:04.047 [INFO][3220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" HandleID="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Workload="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.048 [INFO][3220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" HandleID="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Workload="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bfb00), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.169.77", "pod":"csi-node-driver-mj94h", "timestamp":"2025-08-13 01:39:04.04795292 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.048 [INFO][3220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.048 [INFO][3220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.048 [INFO][3220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.057 [INFO][3220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" host="192.168.169.77" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.062 [INFO][3220] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.073 [INFO][3220] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.075 [INFO][3220] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.077 [INFO][3220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.118910 containerd[1576]: 2025-08-13 01:39:04.078 [INFO][3220] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" host="192.168.169.77" Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.080 [INFO][3220] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.084 [INFO][3220] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" host="192.168.169.77" Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3220] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.194/26] block=192.168.60.192/26 
handle="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" host="192.168.169.77" Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.194/26] handle="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" host="192.168.169.77" Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:04.119279 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.194/26] IPv6=[] ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" HandleID="k8s-pod-network.d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Workload="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.120202 containerd[1576]: 2025-08-13 01:39:04.095 [INFO][3201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-csi--node--driver--mj94h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4787f758-46fd-4818-87f7-49572bbac91a", ResourceVersion:"9077", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 38, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"csi-node-driver-mj94h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00948e67b42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:04.120271 containerd[1576]: 2025-08-13 01:39:04.095 [INFO][3201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.194/32] ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.120271 containerd[1576]: 2025-08-13 01:39:04.095 [INFO][3201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00948e67b42 ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.120271 containerd[1576]: 2025-08-13 01:39:04.099 [INFO][3201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.120332 containerd[1576]: 2025-08-13 01:39:04.099 [INFO][3201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-csi--node--driver--mj94h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4787f758-46fd-4818-87f7-49572bbac91a", ResourceVersion:"9077", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 38, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f", Pod:"csi-node-driver-mj94h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00948e67b42", MAC:"3a:a9:02:67:40:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:04.120380 containerd[1576]: 2025-08-13 01:39:04.115 [INFO][3201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" 
Namespace="calico-system" Pod="csi-node-driver-mj94h" WorkloadEndpoint="192.168.169.77-k8s-csi--node--driver--mj94h-eth0" Aug 13 01:39:04.148824 containerd[1576]: time="2025-08-13T01:39:04.148781860Z" level=info msg="connecting to shim d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f" address="unix:///run/containerd/s/aebc6ec13b2bc749e107f694460c118b671d914ace4a9c4a5f0e1829d09a5b71" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:39:04.158375 kubelet[1934]: I0813 01:39:04.158309 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-ca-bundle\") pod \"0d7129e6-655c-4f80-abca-3fdf8acc703c\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " Aug 13 01:39:04.159454 kubelet[1934]: I0813 01:39:04.159087 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-key-pair\") pod \"0d7129e6-655c-4f80-abca-3fdf8acc703c\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " Aug 13 01:39:04.159454 kubelet[1934]: I0813 01:39:04.159132 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-config\") pod \"0d7129e6-655c-4f80-abca-3fdf8acc703c\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " Aug 13 01:39:04.159454 kubelet[1934]: I0813 01:39:04.159152 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmtw\" (UniqueName: \"kubernetes.io/projected/0d7129e6-655c-4f80-abca-3fdf8acc703c-kube-api-access-hcmtw\") pod \"0d7129e6-655c-4f80-abca-3fdf8acc703c\" (UID: \"0d7129e6-655c-4f80-abca-3fdf8acc703c\") " Aug 13 01:39:04.159454 kubelet[1934]: I0813 01:39:04.159146 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "0d7129e6-655c-4f80-abca-3fdf8acc703c" (UID: "0d7129e6-655c-4f80-abca-3fdf8acc703c"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:39:04.159727 kubelet[1934]: I0813 01:39:04.159711 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-config" (OuterVolumeSpecName: "config") pod "0d7129e6-655c-4f80-abca-3fdf8acc703c" (UID: "0d7129e6-655c-4f80-abca-3fdf8acc703c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:39:04.163881 kubelet[1934]: I0813 01:39:04.163160 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "0d7129e6-655c-4f80-abca-3fdf8acc703c" (UID: "0d7129e6-655c-4f80-abca-3fdf8acc703c"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:39:04.163458 systemd[1]: var-lib-kubelet-pods-0d7129e6\x2d655c\x2d4f80\x2dabca\x2d3fdf8acc703c-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:39:04.166824 systemd[1]: var-lib-kubelet-pods-0d7129e6\x2d655c\x2d4f80\x2dabca\x2d3fdf8acc703c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhcmtw.mount: Deactivated successfully. Aug 13 01:39:04.168486 kubelet[1934]: I0813 01:39:04.168129 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7129e6-655c-4f80-abca-3fdf8acc703c-kube-api-access-hcmtw" (OuterVolumeSpecName: "kube-api-access-hcmtw") pod "0d7129e6-655c-4f80-abca-3fdf8acc703c" (UID: "0d7129e6-655c-4f80-abca-3fdf8acc703c"). InnerVolumeSpecName "kube-api-access-hcmtw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:39:04.182166 systemd[1]: Started cri-containerd-d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f.scope - libcontainer container d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f. Aug 13 01:39:04.204207 systemd-networkd[1455]: cali58995cd7266: Link UP Aug 13 01:39:04.205196 systemd-networkd[1455]: cali58995cd7266: Gained carrier Aug 13 01:39:04.220451 containerd[1576]: 2025-08-13 01:39:04.020 [INFO][3209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0 coredns-668d6bf9bc- kube-system e58301eb-e332-4d9b-9a2c-dd6075626bdb 9146 0 2025-08-13 01:35:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 192.168.169.77 coredns-668d6bf9bc-lk7hd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58995cd7266 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-" Aug 13 01:39:04.220451 containerd[1576]: 2025-08-13 01:39:04.020 [INFO][3209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.220451 containerd[1576]: 2025-08-13 01:39:04.069 [INFO][3226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" HandleID="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" 
Workload="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.069 [INFO][3226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" HandleID="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Workload="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"kube-system", "node":"192.168.169.77", "pod":"coredns-668d6bf9bc-lk7hd", "timestamp":"2025-08-13 01:39:04.069167491 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.069 [INFO][3226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.092 [INFO][3226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.163 [INFO][3226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" host="192.168.169.77" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.171 [INFO][3226] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.176 [INFO][3226] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.177 [INFO][3226] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.180 [INFO][3226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:04.220679 containerd[1576]: 2025-08-13 01:39:04.180 [INFO][3226] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" host="192.168.169.77" Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.182 [INFO][3226] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.189 [INFO][3226] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" host="192.168.169.77" Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.195 [INFO][3226] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.195/26] block=192.168.60.192/26 
handle="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" host="192.168.169.77" Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.195 [INFO][3226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.195/26] handle="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" host="192.168.169.77" Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.195 [INFO][3226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:04.220916 containerd[1576]: 2025-08-13 01:39:04.195 [INFO][3226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.195/26] IPv6=[] ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" HandleID="k8s-pod-network.7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Workload="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.199 [INFO][3209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e58301eb-e332-4d9b-9a2c-dd6075626bdb", ResourceVersion:"9146", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"coredns-668d6bf9bc-lk7hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58995cd7266", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.199 [INFO][3209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.195/32] ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.199 [INFO][3209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58995cd7266 ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.203 [INFO][3209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.203 [INFO][3209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e58301eb-e332-4d9b-9a2c-dd6075626bdb", ResourceVersion:"9146", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c", Pod:"coredns-668d6bf9bc-lk7hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58995cd7266", MAC:"2e:16:80:fe:f4:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:04.221107 containerd[1576]: 2025-08-13 01:39:04.210 [INFO][3209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" Namespace="kube-system" Pod="coredns-668d6bf9bc-lk7hd" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--lk7hd-eth0" Aug 13 01:39:04.228863 containerd[1576]: time="2025-08-13T01:39:04.228789660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj94h,Uid:4787f758-46fd-4818-87f7-49572bbac91a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f\"" Aug 13 01:39:04.233212 containerd[1576]: time="2025-08-13T01:39:04.233193903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:39:04.244558 containerd[1576]: time="2025-08-13T01:39:04.244488578Z" level=info msg="connecting to shim 7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c" address="unix:///run/containerd/s/548283b0e9abdebff0c678e641778d4cd0b12c8ab17afe6f7b449a2be6c9f594" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:39:04.260284 kubelet[1934]: I0813 01:39:04.260255 1934 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-key-pair\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:04.260284 kubelet[1934]: I0813 01:39:04.260282 1934 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-config\") on node \"192.168.169.77\" DevicePath \"\"" 
Aug 13 01:39:04.260376 kubelet[1934]: I0813 01:39:04.260294 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcmtw\" (UniqueName: \"kubernetes.io/projected/0d7129e6-655c-4f80-abca-3fdf8acc703c-kube-api-access-hcmtw\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:04.261117 kubelet[1934]: I0813 01:39:04.261094 1934 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d7129e6-655c-4f80-abca-3fdf8acc703c-goldmane-ca-bundle\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:04.268176 systemd[1]: Started cri-containerd-7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c.scope - libcontainer container 7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c. Aug 13 01:39:04.317120 containerd[1576]: time="2025-08-13T01:39:04.316996814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lk7hd,Uid:e58301eb-e332-4d9b-9a2c-dd6075626bdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c\"" Aug 13 01:39:04.317585 kubelet[1934]: E0813 01:39:04.317567 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:04.904372 kubelet[1934]: E0813 01:39:04.904299 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:04.924553 containerd[1576]: time="2025-08-13T01:39:04.924509028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:04.925079 containerd[1576]: time="2025-08-13T01:39:04.925007438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:39:04.925573 
containerd[1576]: time="2025-08-13T01:39:04.925517538Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:04.926770 containerd[1576]: time="2025-08-13T01:39:04.926750079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:04.927534 containerd[1576]: time="2025-08-13T01:39:04.927258549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 693.665376ms" Aug 13 01:39:04.927534 containerd[1576]: time="2025-08-13T01:39:04.927287339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:39:04.928745 containerd[1576]: time="2025-08-13T01:39:04.928705960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:39:04.931065 containerd[1576]: time="2025-08-13T01:39:04.929530520Z" level=info msg="CreateContainer within sandbox \"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:39:04.935620 containerd[1576]: time="2025-08-13T01:39:04.935587173Z" level=info msg="Container 02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:04.948533 containerd[1576]: time="2025-08-13T01:39:04.948495390Z" level=info msg="CreateContainer within sandbox 
\"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822\"" Aug 13 01:39:04.949090 containerd[1576]: time="2025-08-13T01:39:04.949056510Z" level=info msg="StartContainer for \"02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822\"" Aug 13 01:39:04.950224 containerd[1576]: time="2025-08-13T01:39:04.950201631Z" level=info msg="connecting to shim 02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822" address="unix:///run/containerd/s/aebc6ec13b2bc749e107f694460c118b671d914ace4a9c4a5f0e1829d09a5b71" protocol=ttrpc version=3 Aug 13 01:39:04.967532 containerd[1576]: time="2025-08-13T01:39:04.967511539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5c498445-4llhh,Uid:76b9b46c-a3db-47a1-a6d5-9f38fc763ee1,Namespace:calico-system,Attempt:0,}" Aug 13 01:39:04.969178 systemd[1]: Started cri-containerd-02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822.scope - libcontainer container 02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822. Aug 13 01:39:04.978568 systemd[1]: Removed slice kubepods-besteffort-pod0d7129e6_655c_4f80_abca_3fdf8acc703c.slice - libcontainer container kubepods-besteffort-pod0d7129e6_655c_4f80_abca_3fdf8acc703c.slice. 
Aug 13 01:39:05.030747 containerd[1576]: time="2025-08-13T01:39:05.030710182Z" level=info msg="StartContainer for \"02b9ca822773074e01658f2c8c984fa2f425fcee4869e53460cc65d69feef822\" returns successfully" Aug 13 01:39:05.080469 kubelet[1934]: I0813 01:39:05.079583 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-vzph9"] Aug 13 01:39:05.083213 systemd-networkd[1455]: cali5512fdf3012: Link UP Aug 13 01:39:05.085928 systemd-networkd[1455]: cali5512fdf3012: Gained carrier Aug 13 01:39:05.099197 kubelet[1934]: I0813 01:39:05.099181 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:05.099298 kubelet[1934]: I0813 01:39:05.099289 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:05.101778 kubelet[1934]: I0813 01:39:05.101766 1934 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.013 [INFO][3363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0 whisker-6f5c498445- calico-system 76b9b46c-a3db-47a1-a6d5-9f38fc763ee1 9140 0 2025-08-13 01:36:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f5c498445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 192.168.169.77 whisker-6f5c498445-4llhh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5512fdf3012 [] [] }} ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.013 [INFO][3363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.046 [INFO][3393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.046 [INFO][3393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.169.77", "pod":"whisker-6f5c498445-4llhh", "timestamp":"2025-08-13 01:39:05.04639925 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.046 [INFO][3393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.046 [INFO][3393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.046 [INFO][3393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.053 [INFO][3393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.058 [INFO][3393] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.062 [INFO][3393] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.064 [INFO][3393] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.066 [INFO][3393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.066 [INFO][3393] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.068 [INFO][3393] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.072 [INFO][3393] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.076 [INFO][3393] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.196/26] block=192.168.60.192/26 
handle="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.076 [INFO][3393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.196/26] handle="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" host="192.168.169.77" Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.076 [INFO][3393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:05.104466 containerd[1576]: 2025-08-13 01:39:05.076 [INFO][3393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.196/26] IPv6=[] ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.079 [INFO][3363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0", GenerateName:"whisker-6f5c498445-", Namespace:"calico-system", SelfLink:"", UID:"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1", ResourceVersion:"9140", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f5c498445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"whisker-6f5c498445-4llhh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5512fdf3012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.079 [INFO][3363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.196/32] ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.079 [INFO][3363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5512fdf3012 ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.086 [INFO][3363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.086 [INFO][3363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" 
Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0", GenerateName:"whisker-6f5c498445-", Namespace:"calico-system", SelfLink:"", UID:"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1", ResourceVersion:"9140", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f5c498445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a", Pod:"whisker-6f5c498445-4llhh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5512fdf3012", MAC:"5e:c9:42:ff:3f:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:05.105589 containerd[1576]: 2025-08-13 01:39:05.097 [INFO][3363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Namespace="calico-system" Pod="whisker-6f5c498445-4llhh" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0" Aug 13 01:39:05.126566 containerd[1576]: 
time="2025-08-13T01:39:05.126461359Z" level=info msg="connecting to shim 3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" address="unix:///run/containerd/s/3bd07c73d399e461714befac2fb13cd54ac625e7aa500ab464b23f335a5dcc4c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:39:05.128859 kubelet[1934]: I0813 01:39:05.128806 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:05.129634 kubelet[1934]: I0813 01:39:05.128882 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-84496c965d-vd4rt","calico-system/whisker-6f5c498445-4llhh","calico-system/calico-kube-controllers-868b9987f8-c6whk","kube-system/coredns-668d6bf9bc-lk7hd","kube-system/coredns-668d6bf9bc-sjhnv","default/nginx-deployment-7fcdb87857-gncrg","tigera-operator/tigera-operator-747864d56d-kdxxp","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw","calico-system/csi-node-driver-mj94h"] Aug 13 01:39:05.137141 kubelet[1934]: I0813 01:39:05.137124 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-84496c965d-vd4rt" Aug 13 01:39:05.137501 kubelet[1934]: I0813 01:39:05.137431 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-84496c965d-vd4rt"] Aug 13 01:39:05.166457 kubelet[1934]: I0813 01:39:05.166385 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8dhw\" (UniqueName: \"kubernetes.io/projected/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-kube-api-access-v8dhw\") pod \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\" (UID: \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\") " Aug 13 01:39:05.167280 systemd[1]: Started cri-containerd-3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a.scope - libcontainer container 
3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a. Aug 13 01:39:05.167792 kubelet[1934]: I0813 01:39:05.167323 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-calico-apiserver-certs\") pod \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\" (UID: \"7f3fe3fb-5a16-471a-afe9-f5e076f9d826\") " Aug 13 01:39:05.172934 systemd[1]: var-lib-kubelet-pods-7f3fe3fb\x2d5a16\x2d471a\x2dafe9\x2df5e076f9d826-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8dhw.mount: Deactivated successfully. Aug 13 01:39:05.176197 kubelet[1934]: I0813 01:39:05.175735 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-kube-api-access-v8dhw" (OuterVolumeSpecName: "kube-api-access-v8dhw") pod "7f3fe3fb-5a16-471a-afe9-f5e076f9d826" (UID: "7f3fe3fb-5a16-471a-afe9-f5e076f9d826"). InnerVolumeSpecName "kube-api-access-v8dhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:39:05.177473 kubelet[1934]: I0813 01:39:05.177322 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7f3fe3fb-5a16-471a-afe9-f5e076f9d826" (UID: "7f3fe3fb-5a16-471a-afe9-f5e076f9d826"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:39:05.178822 systemd[1]: var-lib-kubelet-pods-7f3fe3fb\x2d5a16\x2d471a\x2dafe9\x2df5e076f9d826-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 01:39:05.230846 containerd[1576]: time="2025-08-13T01:39:05.230808642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5c498445-4llhh,Uid:76b9b46c-a3db-47a1-a6d5-9f38fc763ee1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\"" Aug 13 01:39:05.267841 kubelet[1934]: I0813 01:39:05.267810 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v8dhw\" (UniqueName: \"kubernetes.io/projected/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-kube-api-access-v8dhw\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:05.267841 kubelet[1934]: I0813 01:39:05.267837 1934 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f3fe3fb-5a16-471a-afe9-f5e076f9d826-calico-apiserver-certs\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:05.453224 systemd-networkd[1455]: cali00948e67b42: Gained IPv6LL Aug 13 01:39:05.904704 kubelet[1934]: E0813 01:39:05.904666 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:06.029266 systemd-networkd[1455]: cali58995cd7266: Gained IPv6LL Aug 13 01:39:06.047327 systemd[1]: Removed slice kubepods-besteffort-pod7f3fe3fb_5a16_471a_afe9_f5e076f9d826.slice - libcontainer container kubepods-besteffort-pod7f3fe3fb_5a16_471a_afe9_f5e076f9d826.slice. Aug 13 01:39:06.137785 kubelet[1934]: I0813 01:39:06.137736 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-84496c965d-vd4rt"] Aug 13 01:39:06.141794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692982836.mount: Deactivated successfully. 
Aug 13 01:39:06.153345 kubelet[1934]: I0813 01:39:06.153317 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:06.153495 kubelet[1934]: I0813 01:39:06.153442 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:06.155493 kubelet[1934]: I0813 01:39:06.155428 1934 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:06.178263 kubelet[1934]: I0813 01:39:06.178185 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:06.178420 kubelet[1934]: I0813 01:39:06.178396 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-6f5c498445-4llhh","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-kube-controllers-868b9987f8-c6whk","default/nginx-deployment-7fcdb87857-gncrg","tigera-operator/tigera-operator-747864d56d-kdxxp","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw","calico-system/csi-node-driver-mj94h"] Aug 13 01:39:06.471525 containerd[1576]: time="2025-08-13T01:39:06.471417855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:06.472512 containerd[1576]: time="2025-08-13T01:39:06.472434762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:39:06.473093 containerd[1576]: time="2025-08-13T01:39:06.473015844Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:06.476032 containerd[1576]: time="2025-08-13T01:39:06.475100411Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:06.476032 containerd[1576]: time="2025-08-13T01:39:06.475921921Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.547185721s" Aug 13 01:39:06.476032 containerd[1576]: time="2025-08-13T01:39:06.475954312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:39:06.477247 containerd[1576]: time="2025-08-13T01:39:06.477220649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:39:06.479259 containerd[1576]: time="2025-08-13T01:39:06.479223983Z" level=info msg="CreateContainer within sandbox \"7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:39:06.485720 containerd[1576]: time="2025-08-13T01:39:06.484085462Z" level=info msg="Container 25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:06.494791 containerd[1576]: time="2025-08-13T01:39:06.494766387Z" level=info msg="CreateContainer within sandbox \"7c93cc08b591dd5b16c1caea943849165ad2a22408ce353a61d7efc79759e17c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243\"" Aug 13 01:39:06.495414 containerd[1576]: time="2025-08-13T01:39:06.495385479Z" level=info msg="StartContainer for 
\"25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243\"" Aug 13 01:39:06.496256 containerd[1576]: time="2025-08-13T01:39:06.496232450Z" level=info msg="connecting to shim 25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243" address="unix:///run/containerd/s/548283b0e9abdebff0c678e641778d4cd0b12c8ab17afe6f7b449a2be6c9f594" protocol=ttrpc version=3 Aug 13 01:39:06.521172 systemd[1]: Started cri-containerd-25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243.scope - libcontainer container 25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243. Aug 13 01:39:06.552848 containerd[1576]: time="2025-08-13T01:39:06.552812319Z" level=info msg="StartContainer for \"25e3fd56406f90ed05405d121d735fa2b1716ce8258e85804f159495baa24243\" returns successfully" Aug 13 01:39:06.797221 systemd-networkd[1455]: cali5512fdf3012: Gained IPv6LL Aug 13 01:39:06.907711 kubelet[1934]: E0813 01:39:06.907637 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:07.044577 kubelet[1934]: E0813 01:39:07.044543 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:07.057281 kubelet[1934]: I0813 01:39:07.056052 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lk7hd" podStartSLOduration=185.897189404 podStartE2EDuration="3m8.056023223s" podCreationTimestamp="2025-08-13 01:35:59 +0000 UTC" firstStartedPulling="2025-08-13 01:39:04.318254615 +0000 UTC m=+21.914250381" lastFinishedPulling="2025-08-13 01:39:06.477088444 +0000 UTC m=+24.073084200" observedRunningTime="2025-08-13 01:39:07.05477405 +0000 UTC m=+24.650769806" watchObservedRunningTime="2025-08-13 01:39:07.056023223 +0000 UTC m=+24.652018979" Aug 13 01:39:07.286105 containerd[1576]: 
time="2025-08-13T01:39:07.286033028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.286919 containerd[1576]: time="2025-08-13T01:39:07.286888018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:39:07.287861 containerd[1576]: time="2025-08-13T01:39:07.287833511Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.290057 containerd[1576]: time="2025-08-13T01:39:07.289518379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.290057 containerd[1576]: time="2025-08-13T01:39:07.289906723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 812.656072ms" Aug 13 01:39:07.290057 containerd[1576]: time="2025-08-13T01:39:07.289942974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:39:07.291154 containerd[1576]: time="2025-08-13T01:39:07.291135264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 01:39:07.292191 containerd[1576]: time="2025-08-13T01:39:07.292154410Z" level=info msg="CreateContainer within sandbox 
\"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:39:07.303167 containerd[1576]: time="2025-08-13T01:39:07.303129290Z" level=info msg="Container 920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:07.304879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365026827.mount: Deactivated successfully. Aug 13 01:39:07.310990 containerd[1576]: time="2025-08-13T01:39:07.310889619Z" level=info msg="CreateContainer within sandbox \"d00d08b8beeb6203178203f34e868b5359834b8e1e2cf4183c911cea563ac21f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0\"" Aug 13 01:39:07.311821 containerd[1576]: time="2025-08-13T01:39:07.311797881Z" level=info msg="StartContainer for \"920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0\"" Aug 13 01:39:07.313148 containerd[1576]: time="2025-08-13T01:39:07.313005712Z" level=info msg="connecting to shim 920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0" address="unix:///run/containerd/s/aebc6ec13b2bc749e107f694460c118b671d914ace4a9c4a5f0e1829d09a5b71" protocol=ttrpc version=3 Aug 13 01:39:07.337168 systemd[1]: Started cri-containerd-920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0.scope - libcontainer container 920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0. 
Aug 13 01:39:07.382310 containerd[1576]: time="2025-08-13T01:39:07.382270971Z" level=info msg="StartContainer for \"920a87d0888a81395fbdf29fadbaa7b1d52ccaa20ead20595b4dfbb2b30a72d0\" returns successfully" Aug 13 01:39:07.908374 kubelet[1934]: E0813 01:39:07.908319 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:07.958002 containerd[1576]: time="2025-08-13T01:39:07.957403176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.958002 containerd[1576]: time="2025-08-13T01:39:07.957956606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 01:39:07.958428 containerd[1576]: time="2025-08-13T01:39:07.958408162Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.959665 containerd[1576]: time="2025-08-13T01:39:07.959646684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:07.960071 containerd[1576]: time="2025-08-13T01:39:07.960017848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 668.79919ms" Aug 13 01:39:07.960114 containerd[1576]: time="2025-08-13T01:39:07.960075820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 01:39:07.961864 containerd[1576]: time="2025-08-13T01:39:07.961826610Z" level=info msg="CreateContainer within sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 01:39:07.968441 kubelet[1934]: E0813 01:39:07.968419 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:07.970057 containerd[1576]: time="2025-08-13T01:39:07.969243126Z" level=info msg="Container 396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:07.970673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307012851.mount: Deactivated successfully. Aug 13 01:39:07.978603 containerd[1576]: time="2025-08-13T01:39:07.978580140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhnv,Uid:7723ad53-30d5-4e35-b7c8-fc435759001f,Namespace:kube-system,Attempt:0,}" Aug 13 01:39:07.978999 containerd[1576]: time="2025-08-13T01:39:07.978969174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b9987f8-c6whk,Uid:12dec802-7c33-429a-b888-597fd2eba41c,Namespace:calico-system,Attempt:0,}" Aug 13 01:39:07.986510 containerd[1576]: time="2025-08-13T01:39:07.986105491Z" level=info msg="CreateContainer within sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\"" Aug 13 01:39:07.987590 kubelet[1934]: I0813 01:39:07.987566 1934 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 
01:39:07.987590 kubelet[1934]: I0813 01:39:07.987592 1934 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:39:07.991027 containerd[1576]: time="2025-08-13T01:39:07.990996510Z" level=info msg="StartContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\"" Aug 13 01:39:07.992403 containerd[1576]: time="2025-08-13T01:39:07.992362408Z" level=info msg="connecting to shim 396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3" address="unix:///run/containerd/s/3bd07c73d399e461714befac2fb13cd54ac625e7aa500ab464b23f335a5dcc4c" protocol=ttrpc version=3 Aug 13 01:39:08.029167 systemd[1]: Started cri-containerd-396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3.scope - libcontainer container 396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3. Aug 13 01:39:08.065823 kubelet[1934]: E0813 01:39:08.065792 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:08.101325 containerd[1576]: time="2025-08-13T01:39:08.101191322Z" level=info msg="StartContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" returns successfully" Aug 13 01:39:08.103320 containerd[1576]: time="2025-08-13T01:39:08.103056563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 01:39:08.128750 systemd-networkd[1455]: caliee09fd5223d: Link UP Aug 13 01:39:08.129451 systemd-networkd[1455]: caliee09fd5223d: Gained carrier Aug 13 01:39:08.134963 kubelet[1934]: I0813 01:39:08.134925 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mj94h" podStartSLOduration=22.076255644 podStartE2EDuration="25.134909278s" podCreationTimestamp="2025-08-13 01:38:43 +0000 UTC" 
firstStartedPulling="2025-08-13 01:39:04.232226222 +0000 UTC m=+21.828221978" lastFinishedPulling="2025-08-13 01:39:07.290879856 +0000 UTC m=+24.886875612" observedRunningTime="2025-08-13 01:39:08.075998974 +0000 UTC m=+25.671994730" watchObservedRunningTime="2025-08-13 01:39:08.134909278 +0000 UTC m=+25.730905034" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.041 [INFO][3589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0 coredns-668d6bf9bc- kube-system 7723ad53-30d5-4e35-b7c8-fc435759001f 9135 0 2025-08-13 01:35:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 192.168.169.77 coredns-668d6bf9bc-sjhnv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee09fd5223d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.041 [INFO][3589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.084 [INFO][3635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" HandleID="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Workload="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 
01:39:08.084 [INFO][3635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" HandleID="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Workload="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"192.168.169.77", "pod":"coredns-668d6bf9bc-sjhnv", "timestamp":"2025-08-13 01:39:08.084576563 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.084 [INFO][3635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.084 [INFO][3635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.085 [INFO][3635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.093 [INFO][3635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.098 [INFO][3635] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.104 [INFO][3635] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.107 [INFO][3635] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.110 [INFO][3635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.110 [INFO][3635] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.113 [INFO][3635] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30 Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.117 [INFO][3635] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3635] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.197/26] block=192.168.60.192/26 
handle="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.197/26] handle="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" host="192.168.169.77" Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:08.138852 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.197/26] IPv6=[] ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" HandleID="k8s-pod-network.4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Workload="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.124 [INFO][3589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7723ad53-30d5-4e35-b7c8-fc435759001f", ResourceVersion:"9135", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"coredns-668d6bf9bc-sjhnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee09fd5223d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.125 [INFO][3589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.197/32] ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.125 [INFO][3589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee09fd5223d ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.128 [INFO][3589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.128 [INFO][3589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7723ad53-30d5-4e35-b7c8-fc435759001f", ResourceVersion:"9135", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30", Pod:"coredns-668d6bf9bc-sjhnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee09fd5223d", MAC:"1e:dc:76:b1:bd:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:08.139313 containerd[1576]: 2025-08-13 01:39:08.135 [INFO][3589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjhnv" WorkloadEndpoint="192.168.169.77-k8s-coredns--668d6bf9bc--sjhnv-eth0" Aug 13 01:39:08.161433 containerd[1576]: time="2025-08-13T01:39:08.160720357Z" level=info msg="connecting to shim 4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30" address="unix:///run/containerd/s/462cbb9b50e9cb98fb82e8d079941ed7ef462b20aace33bde3348319014e7d21" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:39:08.183376 systemd[1]: Started cri-containerd-4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30.scope - libcontainer container 4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30. 
Aug 13 01:39:08.230645 systemd-networkd[1455]: calied2bf6c9c1a: Link UP Aug 13 01:39:08.231891 systemd-networkd[1455]: calied2bf6c9c1a: Gained carrier Aug 13 01:39:08.244164 containerd[1576]: time="2025-08-13T01:39:08.243149195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhnv,Uid:7723ad53-30d5-4e35-b7c8-fc435759001f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30\"" Aug 13 01:39:08.244244 kubelet[1934]: E0813 01:39:08.243790 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:08.247452 containerd[1576]: time="2025-08-13T01:39:08.246537466Z" level=info msg="CreateContainer within sandbox \"4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.043 [INFO][3591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0 calico-kube-controllers-868b9987f8- calico-system 12dec802-7c33-429a-b888-597fd2eba41c 9147 0 2025-08-13 01:36:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:868b9987f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 192.168.169.77 calico-kube-controllers-868b9987f8-c6whk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied2bf6c9c1a [] [] }} ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" 
WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.043 [INFO][3591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.094 [INFO][3637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" HandleID="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Workload="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.094 [INFO][3637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" HandleID="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Workload="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.169.77", "pod":"calico-kube-controllers-868b9987f8-c6whk", "timestamp":"2025-08-13 01:39:08.094147284 +0000 UTC"}, Hostname:"192.168.169.77", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.094 [INFO][3637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.122 [INFO][3637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.169.77' Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.193 [INFO][3637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.199 [INFO][3637] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.203 [INFO][3637] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.205 [INFO][3637] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.207 [INFO][3637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.207 [INFO][3637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.209 [INFO][3637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.213 [INFO][3637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.219 [INFO][3637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.198/26] block=192.168.60.192/26 
handle="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.219 [INFO][3637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.198/26] handle="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" host="192.168.169.77" Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.219 [INFO][3637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:08.249067 containerd[1576]: 2025-08-13 01:39:08.219 [INFO][3637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.198/26] IPv6=[] ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" HandleID="k8s-pod-network.37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Workload="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.222 [INFO][3591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0", GenerateName:"calico-kube-controllers-868b9987f8-", Namespace:"calico-system", SelfLink:"", UID:"12dec802-7c33-429a-b888-597fd2eba41c", ResourceVersion:"9147", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 36, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b9987f8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"", Pod:"calico-kube-controllers-868b9987f8-c6whk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied2bf6c9c1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.222 [INFO][3591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.198/32] ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.222 [INFO][3591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2bf6c9c1a ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.232 [INFO][3591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" 
WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.233 [INFO][3591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0", GenerateName:"calico-kube-controllers-868b9987f8-", Namespace:"calico-system", SelfLink:"", UID:"12dec802-7c33-429a-b888-597fd2eba41c", ResourceVersion:"9147", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 36, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b9987f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.169.77", ContainerID:"37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a", Pod:"calico-kube-controllers-868b9987f8-c6whk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied2bf6c9c1a", MAC:"02:8c:4d:54:8b:a4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:39:08.249552 containerd[1576]: 2025-08-13 01:39:08.244 [INFO][3591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" Namespace="calico-system" Pod="calico-kube-controllers-868b9987f8-c6whk" WorkloadEndpoint="192.168.169.77-k8s-calico--kube--controllers--868b9987f8--c6whk-eth0" Aug 13 01:39:08.257731 containerd[1576]: time="2025-08-13T01:39:08.257706348Z" level=info msg="Container cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:08.262896 containerd[1576]: time="2025-08-13T01:39:08.262826335Z" level=info msg="CreateContainer within sandbox \"4748070a3895bd4fcdc40415a10de0489ce506fdc808491766789486add95f30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1\"" Aug 13 01:39:08.263950 containerd[1576]: time="2025-08-13T01:39:08.263921490Z" level=info msg="StartContainer for \"cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1\"" Aug 13 01:39:08.265418 containerd[1576]: time="2025-08-13T01:39:08.265397568Z" level=info msg="connecting to shim cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1" address="unix:///run/containerd/s/462cbb9b50e9cb98fb82e8d079941ed7ef462b20aace33bde3348319014e7d21" protocol=ttrpc version=3 Aug 13 01:39:08.275269 containerd[1576]: time="2025-08-13T01:39:08.275215127Z" level=info msg="connecting to shim 37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a" address="unix:///run/containerd/s/8b7bf439eb528cf072b8ccbe0a05c058850a1c9031a52b678b16645628b35340" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:39:08.289193 systemd[1]: Started cri-containerd-cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1.scope - libcontainer 
container cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1. Aug 13 01:39:08.318706 systemd[1]: Started cri-containerd-37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a.scope - libcontainer container 37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a. Aug 13 01:39:08.349582 containerd[1576]: time="2025-08-13T01:39:08.349512891Z" level=info msg="StartContainer for \"cd0d9ee074dfa039f5280bef29e2f54ec28ffcd2be9f34e1883dc7c2599883f1\" returns successfully" Aug 13 01:39:08.393001 containerd[1576]: time="2025-08-13T01:39:08.392962204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b9987f8-c6whk,Uid:12dec802-7c33-429a-b888-597fd2eba41c,Namespace:calico-system,Attempt:0,} returns sandbox id \"37c0544d959d5f39a453bf1863808d1873445ffd272562b58dd98687ab01670a\"" Aug 13 01:39:08.908634 kubelet[1934]: E0813 01:39:08.908593 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:09.075558 kubelet[1934]: E0813 01:39:09.075517 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:09.085070 kubelet[1934]: E0813 01:39:09.084645 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:09.096897 kubelet[1934]: I0813 01:39:09.096452 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sjhnv" podStartSLOduration=190.096436932 podStartE2EDuration="3m10.096436932s" podCreationTimestamp="2025-08-13 01:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:39:09.092853533 +0000 UTC 
m=+26.688849289" watchObservedRunningTime="2025-08-13 01:39:09.096436932 +0000 UTC m=+26.692432688" Aug 13 01:39:09.449968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758557074.mount: Deactivated successfully. Aug 13 01:39:09.462751 containerd[1576]: time="2025-08-13T01:39:09.462686921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:09.464076 containerd[1576]: time="2025-08-13T01:39:09.463803285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 01:39:09.464898 containerd[1576]: time="2025-08-13T01:39:09.464818686Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:09.467234 containerd[1576]: time="2025-08-13T01:39:09.466523638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:39:09.467234 containerd[1576]: time="2025-08-13T01:39:09.467120637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.364038684s" Aug 13 01:39:09.467234 containerd[1576]: time="2025-08-13T01:39:09.467153478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 01:39:09.468699 containerd[1576]: 
time="2025-08-13T01:39:09.468680414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:39:09.470732 containerd[1576]: time="2025-08-13T01:39:09.470671394Z" level=info msg="CreateContainer within sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 01:39:09.482102 containerd[1576]: time="2025-08-13T01:39:09.480593627Z" level=info msg="Container 645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:39:09.483102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607724668.mount: Deactivated successfully. Aug 13 01:39:09.485192 systemd-networkd[1455]: calied2bf6c9c1a: Gained IPv6LL Aug 13 01:39:09.492948 containerd[1576]: time="2025-08-13T01:39:09.492879072Z" level=info msg="CreateContainer within sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\"" Aug 13 01:39:09.493501 containerd[1576]: time="2025-08-13T01:39:09.493450760Z" level=info msg="StartContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\"" Aug 13 01:39:09.495612 containerd[1576]: time="2025-08-13T01:39:09.495553204Z" level=info msg="connecting to shim 645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850" address="unix:///run/containerd/s/3bd07c73d399e461714befac2fb13cd54ac625e7aa500ab464b23f335a5dcc4c" protocol=ttrpc version=3 Aug 13 01:39:09.525971 systemd[1]: Started cri-containerd-645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850.scope - libcontainer container 645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850. 
Aug 13 01:39:09.588343 containerd[1576]: time="2025-08-13T01:39:09.588314382Z" level=info msg="StartContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" returns successfully" Aug 13 01:39:09.613923 systemd-networkd[1455]: caliee09fd5223d: Gained IPv6LL Aug 13 01:39:09.909563 kubelet[1934]: E0813 01:39:09.909508 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:10.062714 containerd[1576]: time="2025-08-13T01:39:10.062573360Z" level=error msg="failed to cleanup \"extract-954170659-Ea_0 sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:39:10.063435 containerd[1576]: time="2025-08-13T01:39:10.063329182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:39:10.063435 containerd[1576]: time="2025-08-13T01:39:10.063408114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=12587242" Aug 13 01:39:10.063886 kubelet[1934]: E0813 01:39:10.063693 1934 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:39:10.063886 kubelet[1934]: E0813 01:39:10.063745 1934 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:39:10.067431 kubelet[1934]: E0813 01:39:10.067358 1934 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5s6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-868b9987f8-c6whk_calico-system(12dec802-7c33-429a-b888-597fd2eba41c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:39:10.068839 kubelet[1934]: E0813 01:39:10.068797 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" 
pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" podUID="12dec802-7c33-429a-b888-597fd2eba41c" Aug 13 01:39:10.087758 containerd[1576]: time="2025-08-13T01:39:10.087654658Z" level=info msg="StopContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" with timeout 2 (s)" Aug 13 01:39:10.087892 kubelet[1934]: E0813 01:39:10.087867 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:39:10.088020 containerd[1576]: time="2025-08-13T01:39:10.087872504Z" level=info msg="StopContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" with timeout 2 (s)" Aug 13 01:39:10.088396 containerd[1576]: time="2025-08-13T01:39:10.088358908Z" level=info msg="Stop container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" with signal terminated" Aug 13 01:39:10.088565 containerd[1576]: time="2025-08-13T01:39:10.088521732Z" level=info msg="Stop container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" with signal terminated" Aug 13 01:39:10.088742 kubelet[1934]: E0813 01:39:10.088711 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" podUID="12dec802-7c33-429a-b888-597fd2eba41c" Aug 13 01:39:10.110917 kubelet[1934]: I0813 01:39:10.110078 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/whisker-6f5c498445-4llhh" podStartSLOduration=168.869995629 podStartE2EDuration="2m53.106030624s" podCreationTimestamp="2025-08-13 01:36:17 +0000 UTC" firstStartedPulling="2025-08-13 01:39:05.231927267 +0000 UTC m=+22.827923023" lastFinishedPulling="2025-08-13 01:39:09.467962262 +0000 UTC m=+27.063958018" observedRunningTime="2025-08-13 01:39:10.103343096 +0000 UTC m=+27.699338862" watchObservedRunningTime="2025-08-13 01:39:10.106030624 +0000 UTC m=+27.702026390" Aug 13 01:39:10.117743 systemd[1]: cri-containerd-396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3.scope: Deactivated successfully. Aug 13 01:39:10.119103 systemd[1]: cri-containerd-645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850.scope: Deactivated successfully. Aug 13 01:39:10.122804 containerd[1576]: time="2025-08-13T01:39:10.122690870Z" level=info msg="received exit event container_id:\"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" id:\"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" pid:3842 exit_status:2 exited_at:{seconds:1755049150 nanos:122484965}" Aug 13 01:39:10.122804 containerd[1576]: time="2025-08-13T01:39:10.122769973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" id:\"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" pid:3842 exit_status:2 exited_at:{seconds:1755049150 nanos:122484965}" Aug 13 01:39:10.123587 containerd[1576]: time="2025-08-13T01:39:10.123554006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" id:\"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" pid:3626 exited_at:{seconds:1755049150 nanos:123438402}" Aug 13 01:39:10.123634 containerd[1576]: time="2025-08-13T01:39:10.123603467Z" level=info msg="received exit event 
container_id:\"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" id:\"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" pid:3626 exited_at:{seconds:1755049150 nanos:123438402}" Aug 13 01:39:10.148731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3-rootfs.mount: Deactivated successfully. Aug 13 01:39:10.163488 containerd[1576]: time="2025-08-13T01:39:10.163400126Z" level=info msg="StopContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" returns successfully" Aug 13 01:39:10.163725 containerd[1576]: time="2025-08-13T01:39:10.163695925Z" level=info msg="StopContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" returns successfully" Aug 13 01:39:10.164891 containerd[1576]: time="2025-08-13T01:39:10.164783735Z" level=info msg="StopPodSandbox for \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\"" Aug 13 01:39:10.165084 containerd[1576]: time="2025-08-13T01:39:10.165033292Z" level=info msg="Container to stop \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:39:10.165181 containerd[1576]: time="2025-08-13T01:39:10.165161186Z" level=info msg="Container to stop \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:39:10.172564 systemd[1]: cri-containerd-3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a.scope: Deactivated successfully. 
Aug 13 01:39:10.175850 containerd[1576]: time="2025-08-13T01:39:10.175803201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" id:\"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" pid:3447 exit_status:137 exited_at:{seconds:1755049150 nanos:175432431}"
Aug 13 01:39:10.201314 containerd[1576]: time="2025-08-13T01:39:10.201234788Z" level=info msg="shim disconnected" id=3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a namespace=k8s.io
Aug 13 01:39:10.201314 containerd[1576]: time="2025-08-13T01:39:10.201259139Z" level=warning msg="cleaning up after shim disconnected" id=3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a namespace=k8s.io
Aug 13 01:39:10.201314 containerd[1576]: time="2025-08-13T01:39:10.201266459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:39:10.214939 containerd[1576]: time="2025-08-13T01:39:10.214133538Z" level=info msg="received exit event sandbox_id:\"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" exit_status:137 exited_at:{seconds:1755049150 nanos:175432431}"
Aug 13 01:39:10.263193 systemd-networkd[1455]: cali5512fdf3012: Link DOWN
Aug 13 01:39:10.263357 systemd-networkd[1455]: cali5512fdf3012: Lost carrier
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.262 [INFO][3949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.262 [INFO][3949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" iface="eth0" netns="/var/run/netns/cni-bc6e66bf-b0a7-f996-a6ef-0588cb1d9ea8"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.262 [INFO][3949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" iface="eth0" netns="/var/run/netns/cni-bc6e66bf-b0a7-f996-a6ef-0588cb1d9ea8"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.269 [INFO][3949] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" after=6.896217ms iface="eth0" netns="/var/run/netns/cni-bc6e66bf-b0a7-f996-a6ef-0588cb1d9ea8"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.269 [INFO][3949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.269 [INFO][3949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.298 [INFO][3958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.299 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.299 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.340 [INFO][3958] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.340 [INFO][3958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.342 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 01:39:10.346539 containerd[1576]: 2025-08-13 01:39:10.344 [INFO][3949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:10.347529 containerd[1576]: time="2025-08-13T01:39:10.347482554Z" level=info msg="TearDown network for sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" successfully"
Aug 13 01:39:10.347659 containerd[1576]: time="2025-08-13T01:39:10.347642029Z" level=info msg="StopPodSandbox for \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" returns successfully"
Aug 13 01:39:10.355671 kubelet[1934]: I0813 01:39:10.355647 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-6f5c498445-4llhh"
Aug 13 01:39:10.355671 kubelet[1934]: I0813 01:39:10.355671 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-6f5c498445-4llhh"]
Aug 13 01:39:10.403736 kubelet[1934]: I0813 01:39:10.403699 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-backend-key-pair\") pod \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") "
Aug 13 01:39:10.403736 kubelet[1934]: I0813 01:39:10.403740 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-ca-bundle\") pod \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") "
Aug 13 01:39:10.403847 kubelet[1934]: I0813 01:39:10.403771 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8fzx\" (UniqueName: \"kubernetes.io/projected/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-kube-api-access-w8fzx\") pod \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\" (UID: \"76b9b46c-a3db-47a1-a6d5-9f38fc763ee1\") "
Aug 13 01:39:10.405710 kubelet[1934]: I0813 01:39:10.405667 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1" (UID: "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 01:39:10.407199 kubelet[1934]: I0813 01:39:10.407164 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-kube-api-access-w8fzx" (OuterVolumeSpecName: "kube-api-access-w8fzx") pod "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1" (UID: "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1"). InnerVolumeSpecName "kube-api-access-w8fzx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 01:39:10.408428 kubelet[1934]: I0813 01:39:10.408378 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1" (UID: "76b9b46c-a3db-47a1-a6d5-9f38fc763ee1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 01:39:10.448935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850-rootfs.mount: Deactivated successfully.
Aug 13 01:39:10.449203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a-rootfs.mount: Deactivated successfully.
Aug 13 01:39:10.449295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a-shm.mount: Deactivated successfully.
Aug 13 01:39:10.449372 systemd[1]: run-netns-cni\x2dbc6e66bf\x2db0a7\x2df996\x2da6ef\x2d0588cb1d9ea8.mount: Deactivated successfully.
Aug 13 01:39:10.449444 systemd[1]: var-lib-kubelet-pods-76b9b46c\x2da3db\x2d47a1\x2da6d5\x2d9f38fc763ee1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw8fzx.mount: Deactivated successfully.
Aug 13 01:39:10.449522 systemd[1]: var-lib-kubelet-pods-76b9b46c\x2da3db\x2d47a1\x2da6d5\x2d9f38fc763ee1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Aug 13 01:39:10.504808 kubelet[1934]: I0813 01:39:10.504778 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8fzx\" (UniqueName: \"kubernetes.io/projected/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-kube-api-access-w8fzx\") on node \"192.168.169.77\" DevicePath \"\""
Aug 13 01:39:10.504808 kubelet[1934]: I0813 01:39:10.504807 1934 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-backend-key-pair\") on node \"192.168.169.77\" DevicePath \"\""
Aug 13 01:39:10.504932 kubelet[1934]: I0813 01:39:10.504821 1934 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1-whisker-ca-bundle\") on node \"192.168.169.77\" DevicePath \"\""
Aug 13 01:39:10.910583 kubelet[1934]: E0813 01:39:10.910552 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:39:10.974782 systemd[1]: Removed slice kubepods-besteffort-pod76b9b46c_a3db_47a1_a6d5_9f38fc763ee1.slice - libcontainer container kubepods-besteffort-pod76b9b46c_a3db_47a1_a6d5_9f38fc763ee1.slice.
Aug 13 01:39:11.093241 kubelet[1934]: I0813 01:39:11.091352 1934 scope.go:117] "RemoveContainer" containerID="645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"
Aug 13 01:39:11.093241 kubelet[1934]: E0813 01:39:11.091816 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:39:11.095300 containerd[1576]: time="2025-08-13T01:39:11.095273819Z" level=info msg="RemoveContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\""
Aug 13 01:39:11.099231 containerd[1576]: time="2025-08-13T01:39:11.099209685Z" level=info msg="RemoveContainer for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" returns successfully"
Aug 13 01:39:11.100126 kubelet[1934]: I0813 01:39:11.099350 1934 scope.go:117] "RemoveContainer" containerID="396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"
Aug 13 01:39:11.100949 containerd[1576]: time="2025-08-13T01:39:11.100924421Z" level=info msg="RemoveContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\""
Aug 13 01:39:11.106159 containerd[1576]: time="2025-08-13T01:39:11.106124091Z" level=info msg="RemoveContainer for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" returns successfully"
Aug 13 01:39:11.106477 kubelet[1934]: I0813 01:39:11.106318 1934 scope.go:117] "RemoveContainer" containerID="645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"
Aug 13 01:39:11.106663 containerd[1576]: time="2025-08-13T01:39:11.106639395Z" level=error msg="ContainerStatus for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": not found"
Aug 13 01:39:11.106928 kubelet[1934]: E0813 01:39:11.106820 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": not found" containerID="645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"
Aug 13 01:39:11.106928 kubelet[1934]: I0813 01:39:11.106850 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"} err="failed to get container status \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": rpc error: code = NotFound desc = an error occurred when try to find container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": not found"
Aug 13 01:39:11.106928 kubelet[1934]: I0813 01:39:11.106879 1934 scope.go:117] "RemoveContainer" containerID="396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"
Aug 13 01:39:11.107172 containerd[1576]: time="2025-08-13T01:39:11.107130008Z" level=error msg="ContainerStatus for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": not found"
Aug 13 01:39:11.107323 kubelet[1934]: E0813 01:39:11.107232 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": not found" containerID="396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"
Aug 13 01:39:11.107323 kubelet[1934]: I0813 01:39:11.107263 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"} err="failed to get container status \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": not found"
Aug 13 01:39:11.107323 kubelet[1934]: I0813 01:39:11.107276 1934 scope.go:117] "RemoveContainer" containerID="645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"
Aug 13 01:39:11.107484 containerd[1576]: time="2025-08-13T01:39:11.107457407Z" level=error msg="ContainerStatus for \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": not found"
Aug 13 01:39:11.107581 kubelet[1934]: I0813 01:39:11.107558 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850"} err="failed to get container status \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": rpc error: code = NotFound desc = an error occurred when try to find container \"645411f7e75a20d13be72b142c6a8bbfafdc57dc35cf29a194fd2ddf90dc6850\": not found"
Aug 13 01:39:11.107639 kubelet[1934]: I0813 01:39:11.107632 1934 scope.go:117] "RemoveContainer" containerID="396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"
Aug 13 01:39:11.108071 containerd[1576]: time="2025-08-13T01:39:11.107980080Z" level=error msg="ContainerStatus for \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": not found"
Aug 13 01:39:11.108197 kubelet[1934]: I0813 01:39:11.108155 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3"} err="failed to get container status \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"396048b761cf10ec334881c4402a30a6181b620ffab79bdbe548ae65fbf4d3e3\": not found"
Aug 13 01:39:11.356588 kubelet[1934]: I0813 01:39:11.356503 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-6f5c498445-4llhh"]
Aug 13 01:39:11.372949 kubelet[1934]: I0813 01:39:11.372925 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:39:11.373025 kubelet[1934]: I0813 01:39:11.372956 1934 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:39:11.374279 containerd[1576]: time="2025-08-13T01:39:11.374252783Z" level=info msg="StopPodSandbox for \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\""
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.402 [WARNING][3983] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.402 [INFO][3983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.402 [INFO][3983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" iface="eth0" netns=""
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.402 [INFO][3983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.402 [INFO][3983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.418 [INFO][3990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.418 [INFO][3990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.418 [INFO][3990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.428 [WARNING][3990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.428 [INFO][3990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.430 [INFO][3990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 01:39:11.434300 containerd[1576]: 2025-08-13 01:39:11.432 [INFO][3983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.434656 containerd[1576]: time="2025-08-13T01:39:11.434358709Z" level=info msg="TearDown network for sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" successfully"
Aug 13 01:39:11.434656 containerd[1576]: time="2025-08-13T01:39:11.434381310Z" level=info msg="StopPodSandbox for \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" returns successfully"
Aug 13 01:39:11.435389 containerd[1576]: time="2025-08-13T01:39:11.435370746Z" level=info msg="RemovePodSandbox for \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\""
Aug 13 01:39:11.435454 containerd[1576]: time="2025-08-13T01:39:11.435397687Z" level=info msg="Forcibly stopping sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\""
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.463 [WARNING][4004] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" WorkloadEndpoint="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.463 [INFO][4004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.463 [INFO][4004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" iface="eth0" netns=""
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.463 [INFO][4004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.463 [INFO][4004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.481 [INFO][4011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.482 [INFO][4011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.482 [INFO][4011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.487 [WARNING][4011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.487 [INFO][4011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" HandleID="k8s-pod-network.3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a" Workload="192.168.169.77-k8s-whisker--6f5c498445--4llhh-eth0"
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.489 [INFO][4011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 01:39:11.492962 containerd[1576]: 2025-08-13 01:39:11.491 [INFO][4004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a"
Aug 13 01:39:11.493339 containerd[1576]: time="2025-08-13T01:39:11.493004964Z" level=info msg="TearDown network for sandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" successfully"
Aug 13 01:39:11.498226 containerd[1576]: time="2025-08-13T01:39:11.498179603Z" level=info msg="Ensure that sandbox 3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a in task-service has been cleanup successfully"
Aug 13 01:39:11.500409 containerd[1576]: time="2025-08-13T01:39:11.500387553Z" level=info msg="RemovePodSandbox \"3addc6b2bf0b2139b033d55c179c3dedb73f1ba6a05df5da2fbb24b702b1d52a\" returns successfully"
Aug 13 01:39:11.500993 kubelet[1934]: I0813 01:39:11.500966 1934 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:39:11.511626 kubelet[1934]: I0813 01:39:11.511603 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:39:11.511699 kubelet[1934]: I0813 01:39:11.511683 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-868b9987f8-c6whk","default/nginx-deployment-7fcdb87857-gncrg","tigera-operator/tigera-operator-747864d56d-kdxxp","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw","calico-system/csi-node-driver-mj94h"]
Aug 13 01:39:11.511738 kubelet[1934]: E0813 01:39:11.511709 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk"
Aug 13 01:39:11.512307 containerd[1576]: time="2025-08-13T01:39:11.512281172Z" level=info msg="StopContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" with timeout 2 (s)"
Aug 13 01:39:11.512575 containerd[1576]: time="2025-08-13T01:39:11.512557920Z" level=info msg="Stop container \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" with signal quit"
Aug 13 01:39:11.533355 systemd[1]: cri-containerd-39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0.scope: Deactivated successfully.
Aug 13 01:39:11.535283 containerd[1576]: time="2025-08-13T01:39:11.535026703Z" level=info msg="received exit event container_id:\"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" id:\"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" pid:3144 exited_at:{seconds:1755049151 nanos:534619832}"
Aug 13 01:39:11.535585 containerd[1576]: time="2025-08-13T01:39:11.535170777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" id:\"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" pid:3144 exited_at:{seconds:1755049151 nanos:534619832}"
Aug 13 01:39:11.554917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0-rootfs.mount: Deactivated successfully.
Aug 13 01:39:11.561136 containerd[1576]: time="2025-08-13T01:39:11.561062393Z" level=info msg="StopContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" returns successfully"
Aug 13 01:39:11.561817 containerd[1576]: time="2025-08-13T01:39:11.561787932Z" level=info msg="StopPodSandbox for \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\""
Aug 13 01:39:11.561869 containerd[1576]: time="2025-08-13T01:39:11.561845193Z" level=info msg="Container to stop \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:39:11.567814 systemd[1]: cri-containerd-2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f.scope: Deactivated successfully.
Aug 13 01:39:11.570693 containerd[1576]: time="2025-08-13T01:39:11.570645890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" id:\"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" pid:3034 exit_status:137 exited_at:{seconds:1755049151 nanos:570230618}"
Aug 13 01:39:11.591798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f-rootfs.mount: Deactivated successfully.
Aug 13 01:39:11.593229 containerd[1576]: time="2025-08-13T01:39:11.593144094Z" level=info msg="received exit event sandbox_id:\"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" exit_status:137 exited_at:{seconds:1755049151 nanos:570230618}"
Aug 13 01:39:11.595850 containerd[1576]: time="2025-08-13T01:39:11.595823116Z" level=info msg="shim disconnected" id=2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f namespace=k8s.io
Aug 13 01:39:11.595905 containerd[1576]: time="2025-08-13T01:39:11.595849447Z" level=warning msg="cleaning up after shim disconnected" id=2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f namespace=k8s.io
Aug 13 01:39:11.595905 containerd[1576]: time="2025-08-13T01:39:11.595857547Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:39:11.596293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f-shm.mount: Deactivated successfully.
Aug 13 01:39:11.637651 systemd-networkd[1455]: cali43d09e7fe56: Link DOWN
Aug 13 01:39:11.637659 systemd-networkd[1455]: cali43d09e7fe56: Lost carrier
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.636 [INFO][4083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.636 [INFO][4083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" iface="eth0" netns="/var/run/netns/cni-4168fef6-3286-99b6-e1a1-936bd0ccc577"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.636 [INFO][4083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" iface="eth0" netns="/var/run/netns/cni-4168fef6-3286-99b6-e1a1-936bd0ccc577"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.644 [INFO][4083] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" after=7.412739ms iface="eth0" netns="/var/run/netns/cni-4168fef6-3286-99b6-e1a1-936bd0ccc577"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.644 [INFO][4083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.645 [INFO][4083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.664 [INFO][4096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.664 [INFO][4096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.664 [INFO][4096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.691 [INFO][4096] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.691 [INFO][4096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0"
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.692 [INFO][4096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 01:39:11.696099 containerd[1576]: 2025-08-13 01:39:11.694 [INFO][4083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f"
Aug 13 01:39:11.698254 systemd[1]: run-netns-cni\x2d4168fef6\x2d3286\x2d99b6\x2de1a1\x2d936bd0ccc577.mount: Deactivated successfully.
Aug 13 01:39:11.698741 containerd[1576]: time="2025-08-13T01:39:11.698708720Z" level=info msg="TearDown network for sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" successfully" Aug 13 01:39:11.698784 containerd[1576]: time="2025-08-13T01:39:11.698736251Z" level=info msg="StopPodSandbox for \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" returns successfully" Aug 13 01:39:11.704651 kubelet[1934]: I0813 01:39:11.704631 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="default/nginx-deployment-7fcdb87857-gncrg" Aug 13 01:39:11.704651 kubelet[1934]: I0813 01:39:11.704652 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["default/nginx-deployment-7fcdb87857-gncrg"] Aug 13 01:39:11.812289 kubelet[1934]: I0813 01:39:11.811992 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f5xr\" (UniqueName: \"kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr\") pod \"a6fc4261-d1e0-4df1-b913-90692b3a76b6\" (UID: \"a6fc4261-d1e0-4df1-b913-90692b3a76b6\") " Aug 13 01:39:11.815356 kubelet[1934]: I0813 01:39:11.815334 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr" (OuterVolumeSpecName: "kube-api-access-5f5xr") pod "a6fc4261-d1e0-4df1-b913-90692b3a76b6" (UID: "a6fc4261-d1e0-4df1-b913-90692b3a76b6"). InnerVolumeSpecName "kube-api-access-5f5xr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:39:11.817833 systemd[1]: var-lib-kubelet-pods-a6fc4261\x2dd1e0\x2d4df1\x2db913\x2d90692b3a76b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5f5xr.mount: Deactivated successfully. 
Aug 13 01:39:11.911157 kubelet[1934]: E0813 01:39:11.911077 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:11.913284 kubelet[1934]: I0813 01:39:11.913257 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5f5xr\" (UniqueName: \"kubernetes.io/projected/a6fc4261-d1e0-4df1-b913-90692b3a76b6-kube-api-access-5f5xr\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:12.095385 kubelet[1934]: I0813 01:39:12.095015 1934 scope.go:117] "RemoveContainer" containerID="39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0" Aug 13 01:39:12.097009 containerd[1576]: time="2025-08-13T01:39:12.096842877Z" level=info msg="RemoveContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\"" Aug 13 01:39:12.101124 containerd[1576]: time="2025-08-13T01:39:12.101085344Z" level=info msg="RemoveContainer for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" returns successfully" Aug 13 01:39:12.101395 systemd[1]: Removed slice kubepods-besteffort-poda6fc4261_d1e0_4df1_b913_90692b3a76b6.slice - libcontainer container kubepods-besteffort-poda6fc4261_d1e0_4df1_b913_90692b3a76b6.slice. 
Aug 13 01:39:12.102365 kubelet[1934]: I0813 01:39:12.102343 1934 scope.go:117] "RemoveContainer" containerID="39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0" Aug 13 01:39:12.102775 containerd[1576]: time="2025-08-13T01:39:12.102735265Z" level=error msg="ContainerStatus for \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\": not found" Aug 13 01:39:12.103013 kubelet[1934]: E0813 01:39:12.102988 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\": not found" containerID="39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0" Aug 13 01:39:12.103074 kubelet[1934]: I0813 01:39:12.103016 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0"} err="failed to get container status \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"39abb692d3e45de312c27cd1786e4fd3e9fbf583e295fb9d125fd7a92ab623e0\": not found" Aug 13 01:39:12.705556 kubelet[1934]: I0813 01:39:12.705507 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["default/nginx-deployment-7fcdb87857-gncrg"] Aug 13 01:39:12.717303 kubelet[1934]: I0813 01:39:12.717141 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:12.717303 kubelet[1934]: I0813 01:39:12.717171 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:12.718669 containerd[1576]: time="2025-08-13T01:39:12.718638375Z" level=info msg="StopPodSandbox for 
\"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\"" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.751 [INFO][4117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.751 [INFO][4117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" iface="eth0" netns="" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.751 [INFO][4117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.751 [INFO][4117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.769 [INFO][4124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.769 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.770 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.776 [WARNING][4124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.776 [INFO][4124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.777 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:12.782017 containerd[1576]: 2025-08-13 01:39:12.779 [INFO][4117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.782460 containerd[1576]: time="2025-08-13T01:39:12.782068695Z" level=info msg="TearDown network for sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" successfully" Aug 13 01:39:12.782460 containerd[1576]: time="2025-08-13T01:39:12.782092416Z" level=info msg="StopPodSandbox for \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" returns successfully" Aug 13 01:39:12.782886 containerd[1576]: time="2025-08-13T01:39:12.782865155Z" level=info msg="RemovePodSandbox for \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\"" Aug 13 01:39:12.782939 containerd[1576]: time="2025-08-13T01:39:12.782895455Z" level=info msg="Forcibly stopping sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\"" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.815 [INFO][4138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.846984 
containerd[1576]: 2025-08-13 01:39:12.816 [INFO][4138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" iface="eth0" netns="" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.816 [INFO][4138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.816 [INFO][4138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.835 [INFO][4146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.835 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.835 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.841 [WARNING][4146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.841 [INFO][4146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" HandleID="k8s-pod-network.2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Workload="192.168.169.77-k8s-nginx--deployment--7fcdb87857--gncrg-eth0" Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.842 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:39:12.846984 containerd[1576]: 2025-08-13 01:39:12.844 [INFO][4138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f" Aug 13 01:39:12.846984 containerd[1576]: time="2025-08-13T01:39:12.846980441Z" level=info msg="TearDown network for sandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" successfully" Aug 13 01:39:12.848555 containerd[1576]: time="2025-08-13T01:39:12.848535351Z" level=info msg="Ensure that sandbox 2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f in task-service has been cleanup successfully" Aug 13 01:39:12.851321 containerd[1576]: time="2025-08-13T01:39:12.851299240Z" level=info msg="RemovePodSandbox \"2f985b9d2d9d114f0ce2e79a069beea99784eef056586f9e3a3e3f2414d22f1f\" returns successfully" Aug 13 01:39:12.851809 kubelet[1934]: I0813 01:39:12.851782 1934 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:12.868570 kubelet[1934]: I0813 01:39:12.868533 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:12.868644 kubelet[1934]: I0813 01:39:12.868598 1934 
eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-868b9987f8-c6whk","tigera-operator/tigera-operator-747864d56d-kdxxp","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw","calico-system/csi-node-driver-mj94h"] Aug 13 01:39:12.868644 kubelet[1934]: E0813 01:39:12.868623 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk" Aug 13 01:39:12.869234 containerd[1576]: time="2025-08-13T01:39:12.869159271Z" level=info msg="StopContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" with timeout 2 (s)" Aug 13 01:39:12.869561 containerd[1576]: time="2025-08-13T01:39:12.869534431Z" level=info msg="Stop container \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" with signal terminated" Aug 13 01:39:12.911775 kubelet[1934]: E0813 01:39:12.911751 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:12.969833 kubelet[1934]: I0813 01:39:12.969257 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b9b46c-a3db-47a1-a6d5-9f38fc763ee1" path="/var/lib/kubelet/pods/76b9b46c-a3db-47a1-a6d5-9f38fc763ee1/volumes" Aug 13 01:39:13.455321 systemd[1]: cri-containerd-be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb.scope: Deactivated successfully. Aug 13 01:39:13.456123 systemd[1]: cri-containerd-be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb.scope: Consumed 831ms CPU time, 70.8M memory peak. 
Aug 13 01:39:13.457031 containerd[1576]: time="2025-08-13T01:39:13.456297203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" id:\"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" pid:2415 exited_at:{seconds:1755049153 nanos:455858032}" Aug 13 01:39:13.457031 containerd[1576]: time="2025-08-13T01:39:13.456867086Z" level=info msg="received exit event container_id:\"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" id:\"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" pid:2415 exited_at:{seconds:1755049153 nanos:455858032}" Aug 13 01:39:13.476369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb-rootfs.mount: Deactivated successfully. Aug 13 01:39:13.483932 containerd[1576]: time="2025-08-13T01:39:13.483902336Z" level=info msg="StopContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" returns successfully" Aug 13 01:39:13.484334 containerd[1576]: time="2025-08-13T01:39:13.484306616Z" level=info msg="StopPodSandbox for \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\"" Aug 13 01:39:13.484439 containerd[1576]: time="2025-08-13T01:39:13.484396408Z" level=info msg="Container to stop \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:39:13.490074 systemd[1]: cri-containerd-53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9.scope: Deactivated successfully. 
Aug 13 01:39:13.491544 containerd[1576]: time="2025-08-13T01:39:13.491443404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" id:\"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" pid:2134 exit_status:137 exited_at:{seconds:1755049153 nanos:490801499}" Aug 13 01:39:13.513995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9-rootfs.mount: Deactivated successfully. Aug 13 01:39:13.514860 containerd[1576]: time="2025-08-13T01:39:13.514825288Z" level=info msg="shim disconnected" id=53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9 namespace=k8s.io Aug 13 01:39:13.515000 containerd[1576]: time="2025-08-13T01:39:13.514983441Z" level=info msg="received exit event sandbox_id:\"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" exit_status:137 exited_at:{seconds:1755049153 nanos:490801499}" Aug 13 01:39:13.515441 containerd[1576]: time="2025-08-13T01:39:13.515099064Z" level=warning msg="cleaning up after shim disconnected" id=53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9 namespace=k8s.io Aug 13 01:39:13.515537 containerd[1576]: time="2025-08-13T01:39:13.515513904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:39:13.515835 containerd[1576]: time="2025-08-13T01:39:13.515658287Z" level=info msg="TearDown network for sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" successfully" Aug 13 01:39:13.515835 containerd[1576]: time="2025-08-13T01:39:13.515780330Z" level=info msg="StopPodSandbox for \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" returns successfully" Aug 13 01:39:13.521633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9-shm.mount: Deactivated successfully. 
Aug 13 01:39:13.527949 kubelet[1934]: I0813 01:39:13.527920 1934 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-kdxxp" Aug 13 01:39:13.527949 kubelet[1934]: I0813 01:39:13.527942 1934 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-kdxxp"] Aug 13 01:39:13.548241 kubelet[1934]: I0813 01:39:13.548205 1934 kubelet.go:2351] "Pod admission denied" podUID="3a1a7604-ca9f-4db0-b990-db3f0b63196e" pod="tigera-operator/tigera-operator-747864d56d-xzpr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.576316 kubelet[1934]: I0813 01:39:13.576031 1934 kubelet.go:2351] "Pod admission denied" podUID="feb06da9-2049-406c-8d03-ecdcefcb3d5b" pod="tigera-operator/tigera-operator-747864d56d-wq44d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.596896 kubelet[1934]: I0813 01:39:13.596876 1934 kubelet.go:2351] "Pod admission denied" podUID="95c4c8ae-aaa5-430d-878c-deb15458fae1" pod="tigera-operator/tigera-operator-747864d56d-sxvbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.619588 kubelet[1934]: I0813 01:39:13.619569 1934 kubelet.go:2351] "Pod admission denied" podUID="bc7262ab-9799-4c08-b455-413413eeef39" pod="tigera-operator/tigera-operator-747864d56d-7d8qj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:39:13.621978 kubelet[1934]: I0813 01:39:13.621958 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ztsf\" (UniqueName: \"kubernetes.io/projected/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-kube-api-access-6ztsf\") pod \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\" (UID: \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\") " Aug 13 01:39:13.622372 kubelet[1934]: I0813 01:39:13.621983 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-var-lib-calico\") pod \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\" (UID: \"63c661de-d2c6-4fcc-93a0-f9d7857c2d35\") " Aug 13 01:39:13.622372 kubelet[1934]: I0813 01:39:13.622057 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "63c661de-d2c6-4fcc-93a0-f9d7857c2d35" (UID: "63c661de-d2c6-4fcc-93a0-f9d7857c2d35"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:39:13.625859 systemd[1]: var-lib-kubelet-pods-63c661de\x2dd2c6\x2d4fcc\x2d93a0\x2df9d7857c2d35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ztsf.mount: Deactivated successfully. Aug 13 01:39:13.626529 kubelet[1934]: I0813 01:39:13.626508 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-kube-api-access-6ztsf" (OuterVolumeSpecName: "kube-api-access-6ztsf") pod "63c661de-d2c6-4fcc-93a0-f9d7857c2d35" (UID: "63c661de-d2c6-4fcc-93a0-f9d7857c2d35"). InnerVolumeSpecName "kube-api-access-6ztsf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:39:13.638986 kubelet[1934]: I0813 01:39:13.638963 1934 kubelet.go:2351] "Pod admission denied" podUID="935a4823-3020-44c4-80f2-f64ea63e6d28" pod="tigera-operator/tigera-operator-747864d56d-chgzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.667578 kubelet[1934]: I0813 01:39:13.667559 1934 kubelet.go:2351] "Pod admission denied" podUID="9f132c2a-3a15-40bf-9709-e1f456cc8464" pod="tigera-operator/tigera-operator-747864d56d-cc2dz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.696311 kubelet[1934]: I0813 01:39:13.696278 1934 kubelet.go:2351] "Pod admission denied" podUID="9ba36d0a-0feb-4b29-bae6-959e078cd841" pod="tigera-operator/tigera-operator-747864d56d-gstb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:13.698397 kubelet[1934]: I0813 01:39:13.698359 1934 status_manager.go:890] "Failed to get status for pod" podUID="9ba36d0a-0feb-4b29-bae6-959e078cd841" pod="tigera-operator/tigera-operator-747864d56d-gstb4" err="pods \"tigera-operator-747864d56d-gstb4\" is forbidden: User \"system:node:192.168.169.77\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '192.168.169.77' and this object" Aug 13 01:39:13.722481 kubelet[1934]: I0813 01:39:13.722418 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ztsf\" (UniqueName: \"kubernetes.io/projected/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-kube-api-access-6ztsf\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:13.722481 kubelet[1934]: I0813 01:39:13.722433 1934 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c661de-d2c6-4fcc-93a0-f9d7857c2d35-var-lib-calico\") on node \"192.168.169.77\" DevicePath \"\"" Aug 13 01:39:13.912532 kubelet[1934]: E0813 01:39:13.912501 1934 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:39:14.100126 kubelet[1934]: I0813 01:39:14.099806 1934 scope.go:117] "RemoveContainer" containerID="be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb" Aug 13 01:39:14.102737 containerd[1576]: time="2025-08-13T01:39:14.102712889Z" level=info msg="RemoveContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\"" Aug 13 01:39:14.106574 systemd[1]: Removed slice kubepods-besteffort-pod63c661de_d2c6_4fcc_93a0_f9d7857c2d35.slice - libcontainer container kubepods-besteffort-pod63c661de_d2c6_4fcc_93a0_f9d7857c2d35.slice. Aug 13 01:39:14.106726 containerd[1576]: time="2025-08-13T01:39:14.106576955Z" level=info msg="RemoveContainer for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" returns successfully" Aug 13 01:39:14.106664 systemd[1]: kubepods-besteffort-pod63c661de_d2c6_4fcc_93a0_f9d7857c2d35.slice: Consumed 860ms CPU time, 71M memory peak. 
Aug 13 01:39:14.107443 kubelet[1934]: I0813 01:39:14.107220 1934 scope.go:117] "RemoveContainer" containerID="be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb" Aug 13 01:39:14.107707 containerd[1576]: time="2025-08-13T01:39:14.107686149Z" level=error msg="ContainerStatus for \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\": not found" Aug 13 01:39:14.107894 kubelet[1934]: E0813 01:39:14.107874 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\": not found" containerID="be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb" Aug 13 01:39:14.107967 kubelet[1934]: I0813 01:39:14.107897 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb"} err="failed to get container status \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"be09e3f3180c5b39ef07a111b40403b602ef26b564c66430ca6dee10122be2eb\": not found" Aug 13 01:39:14.126720 kubelet[1934]: I0813 01:39:14.126699 1934 kubelet.go:2351] "Pod admission denied" podUID="dea41e86-7720-4718-ab11-4f355fddd81a" pod="tigera-operator/tigera-operator-747864d56d-7575v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:14.146670 kubelet[1934]: I0813 01:39:14.146650 1934 kubelet.go:2351] "Pod admission denied" podUID="f1bd2335-ed62-436b-a49d-a9eb05dde96f" pod="tigera-operator/tigera-operator-747864d56d-dg7q8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:39:14.166523 kubelet[1934]: I0813 01:39:14.166503 1934 kubelet.go:2351] "Pod admission denied" podUID="dae033c2-13f6-4282-bcf5-faa5ede0ece8" pod="tigera-operator/tigera-operator-747864d56d-b2tdv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:14.190715 kubelet[1934]: I0813 01:39:14.190683 1934 kubelet.go:2351] "Pod admission denied" podUID="b1aff063-fe68-401d-9094-9e6e64e09fe9" pod="tigera-operator/tigera-operator-747864d56d-lx7wl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:14.297382 kubelet[1934]: I0813 01:39:14.297349 1934 kubelet.go:2351] "Pod admission denied" podUID="aa195c4e-8645-4ccf-8c1c-052fbdf918fa" pod="tigera-operator/tigera-operator-747864d56d-nhsnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:39:14.396469 kubelet[1934]: I0813 01:39:14.396363 1934 kubelet.go:2351] "Pod admission denied" podUID="dc9fc90c-3107-4a11-9292-ed729041ae17" pod="tigera-operator/tigera-operator-747864d56d-58f56" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:39:14.528229 kubelet[1934]: I0813 01:39:14.528196 1934 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-kdxxp"] Aug 13 01:39:14.542796 kubelet[1934]: I0813 01:39:14.542769 1934 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:14.542796 kubelet[1934]: I0813 01:39:14.542797 1934 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:14.547986 containerd[1576]: time="2025-08-13T01:39:14.547941093Z" level=info msg="StopPodSandbox for \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\"" Aug 13 01:39:14.548535 containerd[1576]: time="2025-08-13T01:39:14.548113907Z" level=info msg="TearDown network for sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" successfully" Aug 13 01:39:14.548535 containerd[1576]: time="2025-08-13T01:39:14.548128007Z" level=info msg="StopPodSandbox for \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" returns successfully" Aug 13 01:39:14.548535 containerd[1576]: time="2025-08-13T01:39:14.548299261Z" level=info msg="RemovePodSandbox for \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\"" Aug 13 01:39:14.548535 containerd[1576]: time="2025-08-13T01:39:14.548318421Z" level=info msg="Forcibly stopping sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\"" Aug 13 01:39:14.548535 containerd[1576]: time="2025-08-13T01:39:14.548367732Z" level=info msg="TearDown network for sandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" successfully" Aug 13 01:39:14.549738 containerd[1576]: time="2025-08-13T01:39:14.549707722Z" level=info msg="Ensure that sandbox 53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9 in task-service has been cleanup successfully" Aug 13 01:39:14.551814 containerd[1576]: time="2025-08-13T01:39:14.551767818Z" level=info 
msg="RemovePodSandbox \"53cfbd45a779721ba931df13bbf96c3769a339a554742b0929a5cd33d09d60f9\" returns successfully"
Aug 13 01:39:14.552196 kubelet[1934]: I0813 01:39:14.552178 1934 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:39:14.561319 kubelet[1934]: I0813 01:39:14.561296 1934 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:39:14.561399 kubelet[1934]: I0813 01:39:14.561363 1934 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-868b9987f8-c6whk","kube-system/coredns-668d6bf9bc-sjhnv","kube-system/coredns-668d6bf9bc-lk7hd","calico-system/calico-node-hxxs9","kube-system/kube-proxy-rm5rw","calico-system/csi-node-driver-mj94h"]
Aug 13 01:39:14.561399 kubelet[1934]: E0813 01:39:14.561384 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-868b9987f8-c6whk"
Aug 13 01:39:14.561399 kubelet[1934]: E0813 01:39:14.561399 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-sjhnv"
Aug 13 01:39:14.561508 kubelet[1934]: E0813 01:39:14.561407 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lk7hd"
Aug 13 01:39:14.561508 kubelet[1934]: E0813 01:39:14.561417 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hxxs9"
Aug 13 01:39:14.561508 kubelet[1934]: E0813 01:39:14.561425 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rm5rw"
Aug 13 01:39:14.561508 kubelet[1934]: E0813 01:39:14.561435 1934 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mj94h"
Aug 13 01:39:14.561508 kubelet[1934]: I0813 01:39:14.561445 1934 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:39:14.598051 kubelet[1934]: I0813 01:39:14.598018 1934 kubelet.go:2351] "Pod admission denied" podUID="262177e0-6ca9-4312-8a8b-f4b75cbd29dc" pod="tigera-operator/tigera-operator-747864d56d-4tkq6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:14.696882 kubelet[1934]: I0813 01:39:14.696766 1934 kubelet.go:2351] "Pod admission denied" podUID="259e2df9-b89a-4517-8076-53e633da926b" pod="tigera-operator/tigera-operator-747864d56d-vjtfn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:14.745589 kubelet[1934]: I0813 01:39:14.745466 1934 kubelet.go:2351] "Pod admission denied" podUID="771362b4-0fb4-48fd-ba54-604ba7f825a3" pod="tigera-operator/tigera-operator-747864d56d-whh9c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:14.847810 kubelet[1934]: I0813 01:39:14.847771 1934 kubelet.go:2351] "Pod admission denied" podUID="3b602600-c59f-4d04-ae32-04e4a8831c54" pod="tigera-operator/tigera-operator-747864d56d-2mrgr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:14.913148 kubelet[1934]: E0813 01:39:14.913111 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:39:14.947056 kubelet[1934]: I0813 01:39:14.946957 1934 kubelet.go:2351] "Pod admission denied" podUID="6cd622e2-2ba8-4248-8438-865778a6a869" pod="tigera-operator/tigera-operator-747864d56d-2djpg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.048785 kubelet[1934]: I0813 01:39:15.048760 1934 kubelet.go:2351] "Pod admission denied" podUID="5fd3c540-0556-4ca9-bded-1da3f29d7d37" pod="tigera-operator/tigera-operator-747864d56d-gpwr2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.248176 kubelet[1934]: I0813 01:39:15.248097 1934 kubelet.go:2351] "Pod admission denied" podUID="e7652b23-1d5c-40a2-95d3-ea4aeab4b41e" pod="tigera-operator/tigera-operator-747864d56d-ndwzw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.347056 kubelet[1934]: I0813 01:39:15.347018 1934 kubelet.go:2351] "Pod admission denied" podUID="924e6d4e-0a9b-443c-b5e2-9260af6b2ea5" pod="tigera-operator/tigera-operator-747864d56d-kgpcv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.448776 kubelet[1934]: I0813 01:39:15.448623 1934 kubelet.go:2351] "Pod admission denied" podUID="8144b9f2-3876-4eb5-925e-00e895939974" pod="tigera-operator/tigera-operator-747864d56d-278zj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.656977 kubelet[1934]: I0813 01:39:15.656957 1934 kubelet.go:2351] "Pod admission denied" podUID="c649629c-2ad0-46aa-aa95-1fc5415e4be7" pod="tigera-operator/tigera-operator-747864d56d-md8vh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.779324 kubelet[1934]: I0813 01:39:15.779290 1934 kubelet.go:2351] "Pod admission denied" podUID="8fafc731-04a2-4eae-9743-7ea8e6b4aa30" pod="tigera-operator/tigera-operator-747864d56d-dlmx9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.829564 kubelet[1934]: I0813 01:39:15.829547 1934 kubelet.go:2351] "Pod admission denied" podUID="c561859e-3573-498a-8881-891367939e71" pod="tigera-operator/tigera-operator-747864d56d-cm8b5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:15.914209 kubelet[1934]: E0813 01:39:15.914135 1934 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:39:15.954156 kubelet[1934]: I0813 01:39:15.954136 1934 kubelet.go:2351] "Pod admission denied" podUID="fa3813c1-4005-424c-be1f-8925dd02cb1f" pod="tigera-operator/tigera-operator-747864d56d-884cc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.050680 kubelet[1934]: I0813 01:39:16.050653 1934 kubelet.go:2351] "Pod admission denied" podUID="3ebf33bd-4732-4ae8-9444-473b4f25ae48" pod="tigera-operator/tigera-operator-747864d56d-lrpg5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.151068 kubelet[1934]: I0813 01:39:16.151024 1934 kubelet.go:2351] "Pod admission denied" podUID="ffc39f28-db55-4376-8ef7-93c5955d8595" pod="tigera-operator/tigera-operator-747864d56d-5hx7s" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.251709 kubelet[1934]: I0813 01:39:16.251639 1934 kubelet.go:2351] "Pod admission denied" podUID="959797c6-97d0-4712-8280-e86d97b3f773" pod="tigera-operator/tigera-operator-747864d56d-mvrjb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.350257 kubelet[1934]: I0813 01:39:16.350227 1934 kubelet.go:2351] "Pod admission denied" podUID="f3289f30-99e6-4f7b-8918-fd81327f998c" pod="tigera-operator/tigera-operator-747864d56d-4ss2b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.453530 kubelet[1934]: I0813 01:39:16.453498 1934 kubelet.go:2351] "Pod admission denied" podUID="b8c0a313-9f4f-4be1-be6d-82ac20d0cbc3" pod="tigera-operator/tigera-operator-747864d56d-mkjzk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.556421 kubelet[1934]: I0813 01:39:16.556211 1934 kubelet.go:2351] "Pod admission denied" podUID="69a82614-4105-4b45-b34c-d031a37ec0df" pod="tigera-operator/tigera-operator-747864d56d-hwml7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.671782 kubelet[1934]: I0813 01:39:16.671745 1934 kubelet.go:2351] "Pod admission denied" podUID="ce28f51f-50e3-442a-8536-ee02ca0c690e" pod="tigera-operator/tigera-operator-747864d56d-zkq92" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.755027 kubelet[1934]: I0813 01:39:16.755002 1934 kubelet.go:2351] "Pod admission denied" podUID="b09025e3-ea46-4c19-9e28-19780db53088" pod="tigera-operator/tigera-operator-747864d56d-8fr5p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:39:16.804981 update_engine[1541]: I20250813 01:39:16.804927 1541 update_attempter.cc:509] Updating boot flags...