Aug 13 07:08:58.978707 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:08:58.978741 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:08:58.978757 kernel: BIOS-provided physical RAM map: Aug 13 07:08:58.978764 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:08:58.978773 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:08:58.978785 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:08:58.978798 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 07:08:58.978811 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 07:08:58.978822 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:08:58.978837 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:08:58.978847 kernel: NX (Execute Disable) protection: active Aug 13 07:08:58.978859 kernel: APIC: Static calls initialized Aug 13 07:08:58.978876 kernel: SMBIOS 2.8 present. Aug 13 07:08:58.978888 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 07:08:58.978899 kernel: Hypervisor detected: KVM Aug 13 07:08:58.978915 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:08:58.978932 kernel: kvm-clock: using sched offset of 2881190990 cycles Aug 13 07:08:58.978941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:08:58.978955 kernel: tsc: Detected 2494.138 MHz processor Aug 13 07:08:58.978969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:08:58.978982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:08:58.978993 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 07:08:58.979006 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:08:58.979016 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:08:58.979027 kernel: ACPI: Early table checksum verification disabled Aug 13 07:08:58.979039 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 07:08:58.979047 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979056 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979064 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979076 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 07:08:58.979084 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979096 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979105 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979119 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:58.979131 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 07:08:58.979139 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 07:08:58.979147 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 07:08:58.979154 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 07:08:58.979162 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 07:08:58.979170 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 07:08:58.979185 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 07:08:58.979197 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:08:58.979207 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:08:58.979215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:08:58.979223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:08:58.979235 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 07:08:58.979244 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 07:08:58.979257 kernel: Zone ranges: Aug 13 07:08:58.979270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:08:58.979282 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 07:08:58.979292 kernel: Normal empty Aug 13 07:08:58.979304 kernel: Movable zone start for each node Aug 13 07:08:58.979314 kernel: Early memory node ranges Aug 13 07:08:58.979326 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:08:58.979334 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 07:08:58.979343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 07:08:58.979355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:08:58.979363 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:08:58.979374 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 07:08:58.979383 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:08:58.979391 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:08:58.981811 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:08:58.981842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:08:58.981852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:08:58.981868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:08:58.981890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:08:58.981898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:08:58.981907 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:08:58.981916 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:08:58.981928 kernel: TSC deadline timer available Aug 13 07:08:58.981940 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:08:58.981949 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:08:58.981958 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 07:08:58.981977 kernel: Booting paravirtualized kernel on KVM Aug 13 07:08:58.981991 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:08:58.982007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:08:58.982020 kernel: percpu: Embedded 58 pages/cpu 
s197096 r8192 d32280 u1048576 Aug 13 07:08:58.982029 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:08:58.982041 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:08:58.982055 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 07:08:58.982070 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:08:58.982084 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:08:58.982096 kernel: random: crng init done Aug 13 07:08:58.982109 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:08:58.982122 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:08:58.982134 kernel: Fallback order for Node 0: 0 Aug 13 07:08:58.982145 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 07:08:58.982169 kernel: Policy zone: DMA32 Aug 13 07:08:58.982183 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:08:58.982194 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved) Aug 13 07:08:58.982207 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:08:58.982221 kernel: Kernel/User page tables isolation: enabled Aug 13 07:08:58.982230 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:08:58.982238 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:08:58.982247 kernel: Dynamic Preempt: voluntary Aug 13 07:08:58.982256 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:08:58.982267 kernel: rcu: RCU event tracing is enabled. Aug 13 07:08:58.982285 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:08:58.982299 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:08:58.982314 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:08:58.982346 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:08:58.982380 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:08:58.982420 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:08:58.982435 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:08:58.982448 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 07:08:58.982465 kernel: Console: colour VGA+ 80x25 Aug 13 07:08:58.982479 kernel: printk: console [tty0] enabled Aug 13 07:08:58.982491 kernel: printk: console [ttyS0] enabled Aug 13 07:08:58.982501 kernel: ACPI: Core revision 20230628 Aug 13 07:08:58.982510 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:08:58.982529 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:08:58.982542 kernel: x2apic enabled Aug 13 07:08:58.982554 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:08:58.982568 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:08:58.982580 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 13 07:08:58.982594 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Aug 13 07:08:58.982604 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:08:58.982626 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:08:58.982661 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:08:58.982681 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:08:58.982694 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:08:58.982707 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:08:58.982717 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:08:58.982726 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:08:58.982735 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:08:58.982769 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:08:58.982778 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:08:58.982794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:08:58.982803 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:08:58.982812 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:08:58.982821 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:08:58.982830 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:08:58.982840 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:08:58.982848 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:08:58.982858 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:08:58.982871 kernel: landlock: Up and running. Aug 13 07:08:58.982884 kernel: SELinux: Initializing. Aug 13 07:08:58.982899 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:08:58.982915 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:08:58.982924 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 07:08:58.982940 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:58.982950 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:58.982966 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:58.982981 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Aug 13 07:08:58.982995 kernel: signal: max sigframe size: 1776 Aug 13 07:08:58.983010 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:08:58.983021 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:08:58.983030 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:08:58.983039 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:08:58.983049 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:08:58.983058 kernel: .... node #0, CPUs: #1 Aug 13 07:08:58.983067 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:08:58.983079 kernel: smpboot: Max logical packages: 1 Aug 13 07:08:58.983092 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Aug 13 07:08:58.983101 kernel: devtmpfs: initialized Aug 13 07:08:58.983111 kernel: x86/mm: Memory block size: 128MB Aug 13 07:08:58.983120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:08:58.983129 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:08:58.983143 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:08:58.983152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:08:58.983165 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:08:58.983175 kernel: audit: type=2000 audit(1755068937.376:1): state=initialized audit_enabled=0 res=1 Aug 13 07:08:58.983194 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:08:58.983204 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:08:58.983213 kernel: cpuidle: using governor menu Aug 13 07:08:58.983222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:08:58.983231 kernel: dca service started, version 1.12.1 Aug 13 07:08:58.983240 kernel: PCI: Using configuration type 1 for base access Aug 13 07:08:58.983251 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:08:58.983265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:08:58.983279 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:08:58.983303 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:08:58.983318 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:08:58.983332 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:08:58.983346 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:08:58.983361 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:08:58.983376 kernel: ACPI: Interpreter enabled Aug 13 07:08:58.983385 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:08:58.983397 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:08:58.985473 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:08:58.985493 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:08:58.985503 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 07:08:58.985512 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:08:58.985763 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:08:58.985876 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:08:58.985977 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:08:58.985989 kernel: acpiphp: Slot [3] registered Aug 13 07:08:58.986003 kernel: acpiphp: Slot [4] registered Aug 13 07:08:58.986012 kernel: acpiphp: Slot [5] registered Aug 13 07:08:58.986022 kernel: acpiphp: Slot [6] registered Aug 13 07:08:58.986036 kernel: acpiphp: Slot [7] registered Aug 13 07:08:58.986050 kernel: acpiphp: Slot [8] registered Aug 13 07:08:58.986063 kernel: acpiphp: Slot [9] registered Aug 13 07:08:58.986077 kernel: acpiphp: Slot [10] registered Aug 13 07:08:58.986090 kernel: acpiphp: Slot [11] registered Aug 13 07:08:58.986105 kernel: acpiphp: Slot [12] registered Aug 13 07:08:58.986123 kernel: acpiphp: Slot [13] registered Aug 13 07:08:58.986136 kernel: acpiphp: Slot [14] registered Aug 13 07:08:58.986150 kernel: acpiphp: Slot [15] registered Aug 13 07:08:58.986164 kernel: acpiphp: Slot [16] registered Aug 13 07:08:58.986178 kernel: acpiphp: Slot [17] registered Aug 13 07:08:58.986191 kernel: acpiphp: Slot [18] registered Aug 13 07:08:58.986206 kernel: acpiphp: Slot [19] registered Aug 13 07:08:58.986221 kernel: acpiphp: Slot [20] registered Aug 13 07:08:58.986237 kernel: acpiphp: Slot [21] registered Aug 13 07:08:58.986252 kernel: acpiphp: Slot [22] registered Aug 13 07:08:58.986272 kernel: acpiphp: Slot [23] registered Aug 13 07:08:58.986287 kernel: acpiphp: Slot [24] registered Aug 13 07:08:58.986301 kernel: acpiphp: Slot [25] registered Aug 13 07:08:58.986314 kernel: acpiphp: Slot [26] registered Aug 13 07:08:58.986326 kernel: acpiphp: Slot [27] registered Aug 13 07:08:58.986335 kernel: acpiphp: Slot [28] registered Aug 13 07:08:58.986347 kernel: acpiphp: Slot [29] registered Aug 13 07:08:58.986360 kernel: acpiphp: Slot [30] registered Aug 13 07:08:58.986369 kernel: acpiphp: Slot [31] registered Aug 13 07:08:58.986383 kernel: PCI host bridge to bus 0000:00 Aug 13 07:08:58.988700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:08:58.988853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:08:58.988956 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Aug 13 07:08:58.989045 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 07:08:58.989133 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 07:08:58.989225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:08:58.989481 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 07:08:58.989649 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 07:08:58.989784 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 07:08:58.989884 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 07:08:58.989984 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:08:58.990094 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:08:58.990259 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:08:58.990375 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:08:58.990529 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 07:08:58.990631 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 07:08:58.990741 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 07:08:58.990841 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 07:08:58.990938 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 07:08:58.991064 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 07:08:58.991165 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 07:08:58.991265 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 07:08:58.991365 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 07:08:58.993628 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 07:08:58.993738 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:08:58.993884 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:08:58.994035 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 07:08:58.994183 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 07:08:58.994398 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 07:08:58.994556 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:08:58.994661 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 07:08:58.994761 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 07:08:58.994867 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 07:08:58.994994 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 07:08:58.995149 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 07:08:58.995309 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 07:08:58.997525 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 07:08:58.997733 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:08:58.997893 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:08:58.998060 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 07:08:58.998201 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 07:08:58.998354 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:08:58.998476 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Aug 13 07:08:58.998644 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 07:08:58.998821 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 07:08:58.998974 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 07:08:58.999087 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 07:08:58.999203 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 07:08:58.999216 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:08:58.999226 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:08:58.999236 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:08:58.999246 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:08:58.999255 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 07:08:58.999269 kernel: iommu: Default domain type: Translated Aug 13 07:08:58.999278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:08:58.999288 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:08:58.999297 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:08:58.999306 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:08:58.999316 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 07:08:59.001542 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 07:08:59.001697 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 07:08:59.001803 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:08:59.001826 kernel: vgaarb: loaded Aug 13 07:08:59.001836 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:08:59.001846 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:08:59.001855 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:08:59.001865 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:08:59.001875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:08:59.001885 kernel: pnp: PnP ACPI init Aug 13 07:08:59.001894 kernel: pnp: PnP ACPI: found 4 devices Aug 13 07:08:59.001904 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:08:59.001917 kernel: NET: Registered PF_INET protocol family Aug 13 07:08:59.001927 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:08:59.001937 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:08:59.001946 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:08:59.001955 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:08:59.001965 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:08:59.001974 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:08:59.001983 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:08:59.001993 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:08:59.002006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:08:59.002016 kernel: NET: Registered PF_XDP protocol family Aug 13 07:08:59.002172 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:08:59.002280 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:08:59.002373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Aug 13 07:08:59.004847 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 07:08:59.005088 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 07:08:59.005286 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 07:08:59.005518 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:08:59.005549 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 07:08:59.005689 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30418 usecs Aug 13 07:08:59.005710 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:08:59.005725 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:08:59.005743 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 13 07:08:59.005760 kernel: Initialise system trusted keyrings Aug 13 07:08:59.005776 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:08:59.005800 kernel: Key type asymmetric registered Aug 13 07:08:59.005817 kernel: Asymmetric key parser 'x509' registered Aug 13 07:08:59.005832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:08:59.005847 kernel: io scheduler mq-deadline registered Aug 13 07:08:59.005864 kernel: io scheduler kyber registered Aug 13 07:08:59.005880 kernel: io scheduler bfq registered Aug 13 07:08:59.005894 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:08:59.005910 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 07:08:59.005925 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 07:08:59.005940 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 07:08:59.005960 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:08:59.005973 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:08:59.005987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:08:59.006002 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:08:59.006018 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:08:59.006245 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:08:59.006272 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:08:59.006433 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:08:59.006600 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:08:58 UTC (1755068938) Aug 13 07:08:59.006765 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:08:59.006784 kernel: intel_pstate: CPU model not supported Aug 13 07:08:59.006800 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:08:59.006814 kernel: Segment Routing with IPv6 Aug 13 07:08:59.006829 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:08:59.006845 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:08:59.006868 kernel: Key type dns_resolver registered Aug 13 07:08:59.006894 kernel: IPI shorthand broadcast: enabled Aug 13 07:08:59.006910 kernel: sched_clock: Marking stable (940005662, 93643120)->(1134904401, -101255619) Aug 13 07:08:59.006926 kernel: registered taskstats version 1 Aug 13 07:08:59.006943 kernel: Loading compiled-in X.509 certificates Aug 13 07:08:59.006960 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:08:59.006978 kernel: Key type .fscrypt registered Aug 13 07:08:59.006995 kernel: Key type fscrypt-provisioning registered Aug 13 
07:08:59.007012 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:08:59.007029 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:08:59.007049 kernel: ima: No architecture policies found Aug 13 07:08:59.007063 kernel: clk: Disabling unused clocks Aug 13 07:08:59.007078 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:08:59.007093 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:08:59.007109 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:08:59.007157 kernel: Run /init as init process Aug 13 07:08:59.007214 kernel: with arguments: Aug 13 07:08:59.007232 kernel: /init Aug 13 07:08:59.007249 kernel: with environment: Aug 13 07:08:59.007270 kernel: HOME=/ Aug 13 07:08:59.007286 kernel: TERM=linux Aug 13 07:08:59.007303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:08:59.007325 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:08:59.007346 systemd[1]: Detected virtualization kvm. Aug 13 07:08:59.007363 systemd[1]: Detected architecture x86-64. Aug 13 07:08:59.007380 systemd[1]: Running in initrd. Aug 13 07:08:59.007397 systemd[1]: No hostname configured, using default hostname. Aug 13 07:08:59.009487 systemd[1]: Hostname set to . Aug 13 07:08:59.009508 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:08:59.009527 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:08:59.009546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:08:59.009565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:08:59.009586 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:08:59.009599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:08:59.009616 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:08:59.009633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:08:59.009652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:08:59.009671 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:08:59.009689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:08:59.009706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:59.009724 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:08:59.009751 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:08:59.009789 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:08:59.009831 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:08:59.009876 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:08:59.009917 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:08:59.009957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 07:08:59.010002 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:08:59.010040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:08:59.010082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:08:59.010124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:08:59.010168 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:08:59.010206 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:08:59.010225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:08:59.010244 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:08:59.010268 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:08:59.010284 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:08:59.010300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:08:59.010317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:59.010337 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:08:59.010354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:08:59.010372 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:08:59.010395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:08:59.010525 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:08:59.010586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:08:59.010604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:08:59.010625 systemd-journald[183]: Journal started Aug 13 07:08:59.010674 systemd-journald[183]: Runtime Journal (/run/log/journal/7fea10503578457cb8d6366a51a5cd45) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:08:58.982720 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:08:59.025819 kernel: Bridge firewalling registered Aug 13 07:08:59.025856 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:08:59.013491 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:08:59.026462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:08:59.030826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:59.038701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:08:59.040754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:08:59.046748 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:08:59.049664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:08:59.078728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:08:59.081348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:08:59.082701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:08:59.089667 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Aug 13 07:08:59.090373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:08:59.103784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:08:59.132295 dracut-cmdline[217]: dracut-dracut-053 Aug 13 07:08:59.138980 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:08:59.155973 systemd-resolved[219]: Positive Trust Anchors: Aug 13 07:08:59.155995 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:08:59.156045 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:08:59.161041 systemd-resolved[219]: Defaulting to hostname 'linux'. Aug 13 07:08:59.163596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:08:59.164825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:08:59.280445 kernel: SCSI subsystem initialized Aug 13 07:08:59.292479 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:08:59.306479 kernel: iscsi: registered transport (tcp) Aug 13 07:08:59.333486 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:08:59.333598 kernel: QLogic iSCSI HBA Driver Aug 13 07:08:59.411435 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:08:59.419800 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:08:59.457513 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:08:59.457641 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:08:59.459871 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:08:59.511509 kernel: raid6: avx2x4 gen() 14197 MB/s Aug 13 07:08:59.528475 kernel: raid6: avx2x2 gen() 14537 MB/s Aug 13 07:08:59.545703 kernel: raid6: avx2x1 gen() 10825 MB/s Aug 13 07:08:59.545808 kernel: raid6: using algorithm avx2x2 gen() 14537 MB/s Aug 13 07:08:59.563626 kernel: raid6: .... xor() 10838 MB/s, rmw enabled Aug 13 07:08:59.563715 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:08:59.596453 kernel: xor: automatically using best checksumming function avx Aug 13 07:08:59.784474 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:08:59.802600 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:08:59.809763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:08:59.837807 systemd-udevd[402]: Using default interface naming scheme 'v255'. 
Aug 13 07:08:59.845141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:08:59.852007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:08:59.887458 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 13 07:08:59.940027 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:08:59.945784 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:09:00.033461 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:09:00.043775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:09:00.086461 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:09:00.088873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:09:00.090734 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:09:00.092462 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:09:00.100359 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:09:00.147525 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:09:00.166476 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 07:09:00.171670 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:09:00.186448 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 07:09:00.193908 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:09:00.226371 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:09:00.226514 kernel: GPT:9289727 != 125829119 Aug 13 07:09:00.226537 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:09:00.226556 kernel: GPT:9289727 != 125829119 Aug 13 07:09:00.226574 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:09:00.226594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:09:00.241463 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 07:09:00.243052 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:09:00.249546 kernel: AES CTR mode by8 optimization enabled Aug 13 07:09:00.249648 kernel: ACPI: bus type USB registered Aug 13 07:09:00.249663 kernel: usbcore: registered new interface driver usbfs Aug 13 07:09:00.249677 kernel: usbcore: registered new interface driver hub Aug 13 07:09:00.250607 kernel: usbcore: registered new device driver usb Aug 13 07:09:00.253426 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Aug 13 07:09:00.279109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:09:00.280041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:09:00.282858 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:09:00.284202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:09:00.284503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:09:00.286034 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:09:00.299493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:09:00.371110 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 07:09:00.371976 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 07:09:00.372225 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 07:09:00.372400 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 07:09:00.378863 kernel: hub 1-0:1.0: USB hub found Aug 13 07:09:00.379274 kernel: hub 1-0:1.0: 2 ports detected Aug 13 07:09:00.376372 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:09:00.431215 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Aug 13 07:09:00.431254 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Aug 13 07:09:00.431275 kernel: libata version 3.00 loaded. Aug 13 07:09:00.431303 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 07:09:00.427292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:09:00.436526 kernel: scsi host1: ata_piix Aug 13 07:09:00.441452 kernel: scsi host2: ata_piix Aug 13 07:09:00.441874 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 07:09:00.441906 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 07:09:00.446226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:09:00.467909 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:09:00.468885 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:09:00.478395 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:09:00.484889 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:09:00.500306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:09:00.507728 disk-uuid[541]: Primary Header is updated. Aug 13 07:09:00.507728 disk-uuid[541]: Secondary Entries is updated. Aug 13 07:09:00.507728 disk-uuid[541]: Secondary Header is updated. Aug 13 07:09:00.525259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:09:00.535447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:09:00.544633 kernel: GPT:disk_guids don't match. Aug 13 07:09:00.544831 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:09:00.545717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:09:00.613787 kernel: hrtimer: interrupt took 4185438 ns Aug 13 07:09:01.554910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:09:01.555007 disk-uuid[544]: The operation has completed successfully. Aug 13 07:09:01.615676 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:09:01.615881 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:09:01.630866 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:09:01.647168 sh[563]: Success Aug 13 07:09:01.666308 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:09:01.759773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:09:01.778633 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 13 07:09:01.782529 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:09:01.832349 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:09:01.832498 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:09:01.832527 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:09:01.832551 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:09:01.832574 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:09:01.848808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:09:01.851023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:09:01.860749 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:09:01.866695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:09:01.890698 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:09:01.890799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:09:01.890816 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:09:01.895441 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:09:01.914226 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:09:01.914992 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:09:01.930201 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:09:01.938076 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:09:02.127398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:09:02.138593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:09:02.178068 systemd-networkd[752]: lo: Link UP Aug 13 07:09:02.178084 systemd-networkd[752]: lo: Gained carrier Aug 13 07:09:02.183281 ignition[661]: Ignition 2.19.0 Aug 13 07:09:02.183304 ignition[661]: Stage: fetch-offline Aug 13 07:09:02.183367 ignition[661]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:02.183382 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:02.185812 systemd-networkd[752]: Enumeration completed Aug 13 07:09:02.184627 ignition[661]: parsed url from cmdline: "" Aug 13 07:09:02.185990 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:09:02.184635 ignition[661]: no config URL provided Aug 13 07:09:02.186666 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:09:02.184648 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:09:02.186673 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 07:09:02.184682 ignition[661]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:09:02.187494 systemd[1]: Reached target network.target - Network. 
Aug 13 07:09:02.184692 ignition[661]: failed to fetch config: resource requires networking Aug 13 07:09:02.189325 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:09:02.185041 ignition[661]: Ignition finished successfully Aug 13 07:09:02.189332 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:09:02.193820 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:09:02.194483 systemd-networkd[752]: eth0: Link UP Aug 13 07:09:02.194494 systemd-networkd[752]: eth0: Gained carrier Aug 13 07:09:02.194516 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:09:02.198951 systemd-networkd[752]: eth1: Link UP Aug 13 07:09:02.198959 systemd-networkd[752]: eth1: Gained carrier Aug 13 07:09:02.198980 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:09:02.207781 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:09:02.214525 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Aug 13 07:09:02.220557 systemd-networkd[752]: eth0: DHCPv4 address 164.92.99.201/19, gateway 164.92.96.1 acquired from 169.254.169.253 Aug 13 07:09:02.254177 ignition[757]: Ignition 2.19.0 Aug 13 07:09:02.254196 ignition[757]: Stage: fetch Aug 13 07:09:02.254542 ignition[757]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:02.254564 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:02.254749 ignition[757]: parsed url from cmdline: "" Aug 13 07:09:02.254755 ignition[757]: no config URL provided Aug 13 07:09:02.254766 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:09:02.254785 ignition[757]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:09:02.254815 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 07:09:02.279033 ignition[757]: GET result: OK Aug 13 07:09:02.279754 ignition[757]: parsing config with SHA512: ec663cf760ae442835d8acc69a02719bc48d87cd49fbf2596f858d9dfe697d1f787a4dfea0d8ba22bc8f313f814ae595709d377993f354c6e5b7d35ac648f4f7 Aug 13 07:09:02.287987 unknown[757]: fetched base config from "system" Aug 13 07:09:02.288005 unknown[757]: fetched base config from "system" Aug 13 07:09:02.288016 unknown[757]: fetched user config from "digitalocean" Aug 13 07:09:02.289277 ignition[757]: fetch: fetch complete Aug 13 07:09:02.289287 ignition[757]: fetch: fetch passed Aug 13 07:09:02.291273 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:09:02.289385 ignition[757]: Ignition finished successfully Aug 13 07:09:02.298012 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:09:02.386913 ignition[764]: Ignition 2.19.0 Aug 13 07:09:02.386937 ignition[764]: Stage: kargs Aug 13 07:09:02.387293 ignition[764]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:02.387313 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:02.388943 ignition[764]: kargs: kargs passed Aug 13 07:09:02.389050 ignition[764]: Ignition finished successfully Aug 13 07:09:02.392887 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Aug 13 07:09:02.408875 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:09:02.436230 ignition[771]: Ignition 2.19.0 Aug 13 07:09:02.437409 ignition[771]: Stage: disks Aug 13 07:09:02.437824 ignition[771]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:02.437845 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:02.440068 ignition[771]: disks: disks passed Aug 13 07:09:02.440181 ignition[771]: Ignition finished successfully Aug 13 07:09:02.449194 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:09:02.462374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:09:02.463187 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:09:02.464388 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:09:02.466022 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:09:02.467037 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:09:02.482271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:09:02.512853 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:09:02.525495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:09:02.534851 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:09:02.691555 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:09:02.693278 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:09:02.695333 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:09:02.714746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:09:02.723152 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:09:02.727001 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 13 07:09:02.737349 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:09:02.740206 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (787) Aug 13 07:09:02.738904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:09:02.738954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:09:02.749661 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:09:02.749703 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:09:02.749724 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:09:02.755433 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:09:02.756093 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:09:02.759493 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:09:02.767964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:09:02.864449 coreos-metadata[790]: Aug 13 07:09:02.863 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:09:02.875439 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:09:02.878477 coreos-metadata[790]: Aug 13 07:09:02.876 INFO Fetch successful Aug 13 07:09:02.884989 coreos-metadata[789]: Aug 13 07:09:02.884 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:09:02.887890 coreos-metadata[790]: Aug 13 07:09:02.885 INFO wrote hostname ci-4081.3.5-e-dc2da44dd2 to /sysroot/etc/hostname Aug 13 07:09:02.887535 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:09:02.891484 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:09:02.897867 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:09:02.903510 coreos-metadata[789]: Aug 13 07:09:02.902 INFO Fetch successful Aug 13 07:09:02.909010 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:09:02.912520 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 07:09:02.912781 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Aug 13 07:09:03.060120 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:09:03.065617 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:09:03.068905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:09:03.093436 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:09:03.093413 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:09:03.114315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:09:03.143864 ignition[908]: INFO : Ignition 2.19.0 Aug 13 07:09:03.145954 ignition[908]: INFO : Stage: mount Aug 13 07:09:03.147077 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:03.147077 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:03.148781 ignition[908]: INFO : mount: mount passed Aug 13 07:09:03.148781 ignition[908]: INFO : Ignition finished successfully Aug 13 07:09:03.150151 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:09:03.157619 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:09:03.182799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:09:03.199464 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919) Aug 13 07:09:03.201616 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:09:03.201739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:09:03.203587 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:09:03.208453 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:09:03.211538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:09:03.258516 ignition[936]: INFO : Ignition 2.19.0 Aug 13 07:09:03.258516 ignition[936]: INFO : Stage: files Aug 13 07:09:03.260106 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:03.260106 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:03.262075 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:09:03.263136 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:09:03.263136 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:09:03.267544 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:09:03.268474 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:09:03.268474 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:09:03.268285 unknown[936]: wrote ssh authorized keys file for user: core Aug 13 07:09:03.271198 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:09:03.272313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:09:03.272313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:09:03.272313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:09:03.272313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:09:03.278387 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:09:03.278387 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:09:03.278387 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:09:03.278387 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:09:03.278387 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 07:09:03.543740 systemd-networkd[752]: eth0: Gained IPv6LL Aug 13 07:09:03.760198 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Aug 13 07:09:04.180380 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:09:04.180380 ignition[936]: INFO : files: op(8): [started] processing unit "containerd.service" Aug 13 07:09:04.183362 ignition[936]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:09:04.183362 ignition[936]: INFO : files: op(8): op(9): [finished] 
writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:09:04.183362 ignition[936]: INFO : files: op(8): [finished] processing unit "containerd.service" Aug 13 07:09:04.190862 ignition[936]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:09:04.190862 ignition[936]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:09:04.190862 ignition[936]: INFO : files: files passed Aug 13 07:09:04.190862 ignition[936]: INFO : Ignition finished successfully Aug 13 07:09:04.185706 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:09:04.186195 systemd-networkd[752]: eth1: Gained IPv6LL Aug 13 07:09:04.202876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:09:04.208960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:09:04.210151 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:09:04.211591 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:09:04.245812 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:09:04.245812 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:09:04.248284 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:09:04.250323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:09:04.251935 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:09:04.259822 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:09:04.312395 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:09:04.312618 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:09:04.313990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:09:04.314794 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:09:04.315798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:09:04.328941 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:09:04.352148 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:09:04.364810 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:09:04.384730 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:09:04.385730 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:09:04.387091 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:09:04.388190 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:09:04.388475 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:09:04.390199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:09:04.391916 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:09:04.393017 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Aug 13 07:09:04.394563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:09:04.395885 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:09:04.397325 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:09:04.398397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:09:04.399540 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:09:04.400951 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:09:04.401945 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:09:04.402636 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:09:04.402959 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:09:04.404142 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:09:04.404897 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:09:04.405732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:09:04.407500 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:09:04.409026 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:09:04.409317 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:09:04.410810 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:09:04.411105 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:09:04.413022 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:09:04.413334 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:09:04.415156 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 07:09:04.415453 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:09:04.428446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:09:04.429132 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:09:04.429472 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:09:04.433888 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:09:04.434582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:09:04.434975 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:09:04.435940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:09:04.438706 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:09:04.450785 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:09:04.450973 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:09:04.472565 ignition[989]: INFO : Ignition 2.19.0 Aug 13 07:09:04.472565 ignition[989]: INFO : Stage: umount Aug 13 07:09:04.474143 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:09:04.474143 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:09:04.475927 ignition[989]: INFO : umount: umount passed Aug 13 07:09:04.477580 ignition[989]: INFO : Ignition finished successfully Aug 13 07:09:04.479382 systemd[1]: ignition-mount.service: Deactivated successfully. 
Aug 13 07:09:04.479587 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:09:04.480946 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:09:04.481120 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:09:04.482211 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:09:04.482295 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:09:04.483116 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:09:04.483186 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:09:04.484022 systemd[1]: Stopped target network.target - Network. Aug 13 07:09:04.485741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:09:04.485839 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:09:04.486436 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:09:04.486797 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:09:04.496935 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:09:04.497501 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:09:04.497943 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:09:04.498511 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:09:04.498598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:09:04.499007 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:09:04.499050 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:09:04.499392 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:09:04.499503 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:09:04.500593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:09:04.500761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:09:04.502172 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:09:04.504067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:09:04.505503 systemd-networkd[752]: eth0: DHCPv6 lease lost Aug 13 07:09:04.507175 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:09:04.508979 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:09:04.509186 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:09:04.509562 systemd-networkd[752]: eth1: DHCPv6 lease lost Aug 13 07:09:04.511621 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:09:04.511776 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:09:04.516262 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:09:04.516377 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:09:04.518880 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:09:04.518967 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:09:04.526691 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:09:04.527191 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:09:04.527290 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 13 07:09:04.530086 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:09:04.533445 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:09:04.534582 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:09:04.547112 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:09:04.547209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:09:04.548612 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:09:04.548838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:09:04.549973 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:09:04.550062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:09:04.551329 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:09:04.551594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:09:04.553437 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:09:04.553574 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:09:04.556003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:09:04.556077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:09:04.557161 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:09:04.557213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:09:04.558144 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:09:04.558327 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:09:04.559975 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:09:04.560059 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:09:04.561026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:09:04.561114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:09:04.568863 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:09:04.570156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:09:04.570268 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:09:04.572245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:09:04.572333 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:09:04.580013 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:09:04.580995 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:09:04.582275 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:09:04.588768 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:09:04.601935 systemd[1]: Switching root. Aug 13 07:09:04.636599 systemd-journald[183]: Journal stopped Aug 13 07:09:06.185240 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Aug 13 07:09:06.185354 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:09:06.185378 kernel: SELinux: policy capability open_perms=1 Aug 13 07:09:06.186479 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:09:06.186523 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:09:06.186545 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:09:06.186565 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:09:06.186595 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:09:06.186613 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:09:06.186639 kernel: audit: type=1403 audit(1755068944.893:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:09:06.186661 systemd[1]: Successfully loaded SELinux policy in 47.301ms. Aug 13 07:09:06.186697 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.183ms. Aug 13 07:09:06.186725 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:09:06.186746 systemd[1]: Detected virtualization kvm. Aug 13 07:09:06.186765 systemd[1]: Detected architecture x86-64. Aug 13 07:09:06.186783 systemd[1]: Detected first boot. Aug 13 07:09:06.186802 systemd[1]: Hostname set to <ci-4081.3.5-e-dc2da44dd2>. Aug 13 07:09:06.186824 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:09:06.186844 zram_generator::config[1051]: No configuration found. Aug 13 07:09:06.186864 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:09:06.186888 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:09:06.186910 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:09:06.186932 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:09:06.186952 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:09:06.186972 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:09:06.186992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:09:06.187013 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:09:06.187033 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:09:06.187060 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:09:06.187085 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:09:06.187105 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:09:06.187125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:09:06.187142 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:09:06.187162 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:09:06.187183 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:09:06.187203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:09:06.187223 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:09:06.187248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:09:06.187267 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:09:06.187288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:09:06.187308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:09:06.187327 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:09:06.187345 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:09:06.187364 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:09:06.187390 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:09:06.187444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:09:06.187466 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:09:06.187484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:09:06.187503 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:09:06.187523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:09:06.187544 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:09:06.187563 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:09:06.187582 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:09:06.187614 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:09:06.187634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:06.187654 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:09:06.187673 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:09:06.187692 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:09:06.187711 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:09:06.187731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:09:06.187749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:09:06.187774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:09:06.187795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:09:06.187816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:09:06.187836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:09:06.187855 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:09:06.187874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:09:06.187895 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:09:06.187916 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Aug 13 07:09:06.187945 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 13 07:09:06.187968 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:09:06.187995 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:09:06.188015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:09:06.188034 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:09:06.188059 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:09:06.188081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:06.188099 kernel: loop: module loaded Aug 13 07:09:06.188120 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:09:06.188143 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:09:06.188163 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:09:06.188234 systemd-journald[1141]: Collecting audit messages is disabled. Aug 13 07:09:06.188277 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:09:06.188301 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:09:06.188324 systemd-journald[1141]: Journal started Aug 13 07:09:06.188369 systemd-journald[1141]: Runtime Journal (/run/log/journal/7fea10503578457cb8d6366a51a5cd45) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:09:06.193449 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:09:06.200663 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:09:06.201526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:09:06.204057 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:09:06.204351 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:09:06.205526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:09:06.205810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:09:06.210380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:09:06.210701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:09:06.212228 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:09:06.212529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:09:06.224669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:09:06.226060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:09:06.236016 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:09:06.249213 kernel: fuse: init (API version 7.39) Aug 13 07:09:06.251698 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:09:06.263588 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:09:06.264291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Aug 13 07:09:06.286453 kernel: ACPI: bus type drm_connector registered Aug 13 07:09:06.300639 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:09:06.304637 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:09:06.308100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:09:06.321781 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:09:06.322355 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:09:06.324846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:09:06.329742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:09:06.334837 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:09:06.335924 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:09:06.336225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:09:06.337172 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:09:06.341840 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:09:06.345229 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:09:06.359709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:09:06.370254 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:09:06.383139 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:09:06.383922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:09:06.387240 systemd-journald[1141]: Time spent on flushing to /var/log/journal/7fea10503578457cb8d6366a51a5cd45 is 69.896ms for 962 entries. Aug 13 07:09:06.387240 systemd-journald[1141]: System Journal (/var/log/journal/7fea10503578457cb8d6366a51a5cd45) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:09:06.488750 systemd-journald[1141]: Received client request to flush runtime journal. Aug 13 07:09:06.459097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:09:06.479160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:09:06.492804 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:09:06.495109 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:09:06.495139 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:09:06.497127 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:09:06.511960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:09:06.534689 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:09:06.553752 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:09:06.597996 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:09:06.610836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 13 07:09:06.636009 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Aug 13 07:09:06.636034 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Aug 13 07:09:06.644287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:09:07.378721 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:09:07.386811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:09:07.438073 systemd-udevd[1221]: Using default interface naming scheme 'v255'. Aug 13 07:09:07.471287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:09:07.482730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:09:07.521299 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:09:07.554294 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 13 07:09:07.656106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:07.656479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:09:07.664709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:09:07.667457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1232) Aug 13 07:09:07.676680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:09:07.689276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:09:07.690540 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:09:07.690614 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:09:07.690693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:07.701050 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:09:07.703688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:09:07.703978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:09:07.734993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:09:07.735346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:09:07.758766 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:09:07.759984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:09:07.773216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:09:07.774637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 13 07:09:07.850444 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 07:09:07.872482 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:09:07.887837 systemd-networkd[1226]: lo: Link UP Aug 13 07:09:07.887850 systemd-networkd[1226]: lo: Gained carrier Aug 13 07:09:07.892149 systemd-networkd[1226]: Enumeration completed Aug 13 07:09:07.892374 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:09:07.895356 systemd-networkd[1226]: eth0: Configuring with /run/systemd/network/10-f2:35:02:14:a4:7d.network. Aug 13 07:09:07.896783 systemd-networkd[1226]: eth1: Configuring with /run/systemd/network/10-26:3e:02:a5:3a:cc.network. Aug 13 07:09:07.898571 systemd-networkd[1226]: eth0: Link UP Aug 13 07:09:07.898687 systemd-networkd[1226]: eth0: Gained carrier Aug 13 07:09:07.904557 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 07:09:07.907333 systemd-networkd[1226]: eth1: Link UP Aug 13 07:09:07.907340 systemd-networkd[1226]: eth1: Gained carrier Aug 13 07:09:07.911785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:09:07.927441 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 07:09:07.974196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:09:08.005619 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:09:08.024798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:09:08.044262 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 07:09:08.044438 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 07:09:08.044896 kernel: Console: switching to colour dummy device 80x25 Aug 13 07:09:08.046433 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 07:09:08.046502 kernel: [drm] features: -context_init Aug 13 07:09:08.052444 kernel: [drm] number of scanouts: 1 Aug 13 07:09:08.052542 kernel: [drm] number of cap sets: 0 Aug 13 07:09:08.054446 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 13 07:09:08.061436 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 13 07:09:08.061567 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:09:08.078503 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 07:09:08.150605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:09:08.151288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:09:08.160936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:09:08.224473 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:09:08.223415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:09:08.254407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:09:08.261746 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:09:08.292922 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:09:08.326425 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:09:08.327851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:09:08.335730 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Aug 13 07:09:08.352066 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:09:08.390298 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:09:08.391354 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:09:08.401606 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 07:09:08.403220 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:09:08.403277 systemd[1]: Reached target machines.target - Containers. Aug 13 07:09:08.406653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:09:08.425458 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 07:09:08.429613 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 07:09:08.432337 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:09:08.435922 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:09:08.444847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:09:08.453882 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:09:08.456107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:09:08.460763 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:09:08.469737 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:09:08.478367 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:09:08.484996 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:09:08.514612 kernel: loop0: detected capacity change from 0 to 8 Aug 13 07:09:08.533037 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:09:08.541120 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:09:08.542326 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:09:08.567138 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 07:09:08.610797 kernel: loop2: detected capacity change from 0 to 140768 Aug 13 07:09:08.674902 kernel: loop3: detected capacity change from 0 to 221472 Aug 13 07:09:08.725804 kernel: loop4: detected capacity change from 0 to 8 Aug 13 07:09:08.729620 kernel: loop5: detected capacity change from 0 to 142488 Aug 13 07:09:08.762040 kernel: loop6: detected capacity change from 0 to 140768 Aug 13 07:09:08.777465 kernel: loop7: detected capacity change from 0 to 221472 Aug 13 07:09:08.792781 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 13 07:09:08.793479 (sd-merge)[1314]: Merged extensions into '/usr'. Aug 13 07:09:08.818714 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:09:08.818740 systemd[1]: Reloading... Aug 13 07:09:08.947616 zram_generator::config[1342]: No configuration found. 
Aug 13 07:09:09.190673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:09.213514 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:09:09.288309 systemd[1]: Reloading finished in 468 ms. Aug 13 07:09:09.303619 systemd-networkd[1226]: eth1: Gained IPv6LL Aug 13 07:09:09.310226 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:09:09.314219 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:09:09.317310 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:09:09.336839 systemd[1]: Starting ensure-sysext.service... Aug 13 07:09:09.342713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:09:09.351395 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:09:09.353516 systemd[1]: Reloading... Aug 13 07:09:09.416661 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:09:09.417145 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:09:09.420797 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:09:09.421611 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Aug 13 07:09:09.421733 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Aug 13 07:09:09.429969 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:09:09.430174 systemd-tmpfiles[1395]: Skipping /boot Aug 13 07:09:09.451012 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:09:09.451269 systemd-tmpfiles[1395]: Skipping /boot Aug 13 07:09:09.474440 zram_generator::config[1419]: No configuration found. Aug 13 07:09:09.688667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:09.751594 systemd-networkd[1226]: eth0: Gained IPv6LL Aug 13 07:09:09.769505 systemd[1]: Reloading finished in 415 ms. Aug 13 07:09:09.786098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:09:09.804761 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:09:09.817650 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:09:09.822065 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:09:09.837360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:09:09.854829 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:09:09.870313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:09.872308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 07:09:09.876757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:09:09.896599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:09:09.910732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:09:09.913361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:09:09.913557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:09.919916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:09:09.920129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:09:09.926608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:09:09.926809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:09:09.941950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:09.944341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:09:09.953624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:09:09.967478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:09:09.970260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:09:09.970523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:09.987725 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:09:09.990207 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:09:09.991295 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:09:09.997837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:09:10.000942 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:09:10.003699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:09:10.004003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:09:10.021300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:09:10.027689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:09:10.034745 augenrules[1513]: No rules Aug 13 07:09:10.039645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:09:10.052272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:10.053928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:09:10.059773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:09:10.069647 systemd-resolved[1477]: Positive Trust Anchors: Aug 13 07:09:10.069906 systemd-resolved[1477]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:09:10.069964 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:09:10.074699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:09:10.082119 systemd-resolved[1477]: Using system hostname 'ci-4081.3.5-e-dc2da44dd2'. Aug 13 07:09:10.088743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:09:10.095725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:09:10.096519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:09:10.109682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:09:10.112012 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:09:10.112072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:09:10.116513 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:09:10.127803 systemd[1]: Finished ensure-sysext.service. Aug 13 07:09:10.130051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:09:10.130369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:09:10.133362 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:09:10.133772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:09:10.134854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:09:10.135103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:09:10.138511 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:09:10.138848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:09:10.149736 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:09:10.155321 systemd[1]: Reached target network.target - Network. Aug 13 07:09:10.157297 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:09:10.157838 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:09:10.159157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:09:10.159289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:09:10.166734 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:09:10.264151 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Aug 13 07:09:10.265259 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:09:10.269717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:09:10.270627 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:09:10.271249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:09:10.272100 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:09:10.272154 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:09:10.272940 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:09:10.273958 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:09:10.274695 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:09:10.275159 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:09:10.278750 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:09:10.282916 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:09:10.288200 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:09:10.291481 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:09:10.292127 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:09:10.292737 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:09:10.294270 systemd[1]: System is tainted: cgroupsv1 Aug 13 07:09:10.294379 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:09:10.294440 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:09:10.296123 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:09:10.306786 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:09:10.311976 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:09:10.325612 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:09:10.339611 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:09:10.340235 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:09:10.347615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:10.349682 coreos-metadata[1546]: Aug 13 07:09:10.348 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:09:10.364501 coreos-metadata[1546]: Aug 13 07:09:10.363 INFO Fetch successful Aug 13 07:09:10.367294 jq[1549]: false Aug 13 07:09:10.368692 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:09:10.382674 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:09:10.398129 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:09:10.404723 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:09:10.423298 dbus-daemon[1548]: [system] SELinux support is enabled Aug 13 07:09:10.427662 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 07:09:10.430019 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:09:10.437505 extend-filesystems[1552]: Found loop4 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found loop5 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found loop6 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found loop7 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda1 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda2 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda3 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found usr Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda4 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda6 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda7 Aug 13 07:09:10.445754 extend-filesystems[1552]: Found vda9 Aug 13 07:09:10.445754 extend-filesystems[1552]: Checking size of /dev/vda9 Aug 13 07:09:10.449326 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:09:10.470723 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:09:10.485618 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:09:10.499859 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:09:10.509914 jq[1569]: true Aug 13 07:09:10.506960 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:09:10.522654 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:09:10.523058 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:09:10.524107 systemd-timesyncd[1541]: Contacted time server 23.111.186.186:123 (0.flatcar.pool.ntp.org). Aug 13 07:09:10.524193 systemd-timesyncd[1541]: Initial clock synchronization to Wed 2025-08-13 07:09:10.410178 UTC. Aug 13 07:09:10.539724 update_engine[1565]: I20250813 07:09:10.534919 1565 main.cc:92] Flatcar Update Engine starting Aug 13 07:09:10.556176 update_engine[1565]: I20250813 07:09:10.553720 1565 update_check_scheduler.cc:74] Next update check in 4m13s Aug 13 07:09:10.588109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:09:10.588196 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:09:10.588898 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:09:10.589056 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:09:10.589081 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
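Annotation: the "Found loop4 ... Found vda9 ... Checking size of /dev/vda9" lines come from extend-filesystems enumerating block devices before deciding whether the root partition needs growing. A rough, illustrative equivalent of that scan via sysfs (Linux-only; an assumption about the general approach, not the Flatcar script itself):

    # Sketch: enumerate block devices and partitions in the spirit of the
    # "extend-filesystems: Found ..." lines above, by walking sysfs.
    import os

    def list_block_devices(sys_block="/sys/class/block"):
        found = []
        for name in sorted(os.listdir(sys_block)):
            # Partitions carry a "partition" attribute; whole disks do not.
            is_partition = os.path.exists(os.path.join(sys_block, name, "partition"))
            found.append("Found %s%s" % (name, " (partition)" if is_partition else ""))
        return found

    if __name__ == "__main__":
        for line in list_block_devices():
            print(line)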
Aug 13 07:09:10.598150 extend-filesystems[1552]: Resized partition /dev/vda9 Aug 13 07:09:10.603306 jq[1584]: true Aug 13 07:09:10.629609 extend-filesystems[1598]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:09:10.614205 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:09:10.646382 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 07:09:10.626302 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:09:10.626707 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:09:10.630205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:09:10.657313 systemd-logind[1563]: New seat seat0. Aug 13 07:09:10.663413 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:09:10.676761 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:09:10.679055 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:09:10.684709 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:09:10.684737 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:09:10.686844 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:09:10.693012 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:09:10.698319 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:09:10.839769 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:09:10.858436 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:09:10.858436 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:09:10.858436 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:09:10.878687 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:09:10.860039 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:09:10.878964 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Aug 13 07:09:10.878964 extend-filesystems[1552]: Found vdb Aug 13 07:09:10.860459 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:09:10.871028 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:09:10.898899 systemd[1]: Starting sshkeys.service... Aug 13 07:09:10.964429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1637) Aug 13 07:09:10.966137 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:09:10.979939 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
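Annotation: for scale, the resize above grows the ext4 filesystem on /dev/vda9 online (mounted on /) from 553472 to 15121403 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 57.7 GiB. A quick check of that arithmetic:

    # Arithmetic check of the resize2fs figures in the log:
    # 4 KiB blocks, 553472 -> 15121403 blocks.
    BLOCK = 4096
    old_bytes = 553472 * BLOCK        # ~2.1 GiB initial image-sized filesystem
    new_bytes = 15121403 * BLOCK      # ~57.7 GiB after growing to the disk
    print(round(old_bytes / 2**30, 2), "GiB ->", round(new_bytes / 2**30, 2), "GiB")
    # 2.11 GiB -> 57.68 GiB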
Aug 13 07:09:11.099929 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:09:11.119250 coreos-metadata[1650]: Aug 13 07:09:11.119 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:09:11.136162 coreos-metadata[1650]: Aug 13 07:09:11.136 INFO Fetch successful Aug 13 07:09:11.165585 unknown[1650]: wrote ssh authorized keys file for user: core Aug 13 07:09:11.198498 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:09:11.199321 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:09:11.215169 systemd[1]: Finished sshkeys.service. Aug 13 07:09:11.251377 containerd[1592]: time="2025-08-13T07:09:11.249901073Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:09:11.329236 containerd[1592]: time="2025-08-13T07:09:11.328754769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.332703 containerd[1592]: time="2025-08-13T07:09:11.332649478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:09:11.332703 containerd[1592]: time="2025-08-13T07:09:11.332690595Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:09:11.332703 containerd[1592]: time="2025-08-13T07:09:11.332708907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:09:11.333004 containerd[1592]: time="2025-08-13T07:09:11.332881344Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:09:11.333004 containerd[1592]: time="2025-08-13T07:09:11.332907309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333004 containerd[1592]: time="2025-08-13T07:09:11.332963194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333004 containerd[1592]: time="2025-08-13T07:09:11.332975366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333228 containerd[1592]: time="2025-08-13T07:09:11.333199784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333228 containerd[1592]: time="2025-08-13T07:09:11.333220874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333298 containerd[1592]: time="2025-08-13T07:09:11.333235001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333298 containerd[1592]: time="2025-08-13T07:09:11.333244638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333361 containerd[1592]: time="2025-08-13T07:09:11.333331649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333612 containerd[1592]: time="2025-08-13T07:09:11.333583610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333764 containerd[1592]: time="2025-08-13T07:09:11.333746356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:09:11.333790 containerd[1592]: time="2025-08-13T07:09:11.333763947Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:09:11.335429 containerd[1592]: time="2025-08-13T07:09:11.333836705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:09:11.335429 containerd[1592]: time="2025-08-13T07:09:11.333880593Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:09:11.342484 containerd[1592]: time="2025-08-13T07:09:11.342438182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:09:11.342725 containerd[1592]: time="2025-08-13T07:09:11.342501923Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:09:11.342725 containerd[1592]: time="2025-08-13T07:09:11.342519858Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:09:11.342725 containerd[1592]: time="2025-08-13T07:09:11.342535774Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:09:11.342725 containerd[1592]: time="2025-08-13T07:09:11.342549457Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:09:11.342725 containerd[1592]: time="2025-08-13T07:09:11.342712063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343036051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343153184Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343176291Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343193302Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343207935Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343229648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343243528Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343256477Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343270641Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343282363Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343294227Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343306325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343327110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343379 containerd[1592]: time="2025-08-13T07:09:11.343340695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343364077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343379579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343396667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343422446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343434285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343446223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343458412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343475612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343489735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343501769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343513832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343529052Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343549325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343559879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.343983 containerd[1592]: time="2025-08-13T07:09:11.343576243Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343617580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343649470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343665380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343681723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343691252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343702746Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343712289Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:09:11.344397 containerd[1592]: time="2025-08-13T07:09:11.343722360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:09:11.344577 containerd[1592]: time="2025-08-13T07:09:11.343983809Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:09:11.344577 containerd[1592]: time="2025-08-13T07:09:11.344038070Z" level=info msg="Connect containerd service" Aug 13 07:09:11.344577 containerd[1592]: time="2025-08-13T07:09:11.344087708Z" level=info msg="using legacy CRI server" Aug 13 07:09:11.344577 containerd[1592]: time="2025-08-13T07:09:11.344096726Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:09:11.344577 containerd[1592]: time="2025-08-13T07:09:11.344197261Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:09:11.354884 containerd[1592]: time="2025-08-13T07:09:11.354283835Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 
07:09:11.354884 containerd[1592]: time="2025-08-13T07:09:11.354718990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:09:11.354884 containerd[1592]: time="2025-08-13T07:09:11.354770605Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:09:11.354884 containerd[1592]: time="2025-08-13T07:09:11.354812748Z" level=info msg="Start subscribing containerd event" Aug 13 07:09:11.354884 containerd[1592]: time="2025-08-13T07:09:11.354860158Z" level=info msg="Start recovering state" Aug 13 07:09:11.356322 containerd[1592]: time="2025-08-13T07:09:11.354932850Z" level=info msg="Start event monitor" Aug 13 07:09:11.356322 containerd[1592]: time="2025-08-13T07:09:11.354952489Z" level=info msg="Start snapshots syncer" Aug 13 07:09:11.356322 containerd[1592]: time="2025-08-13T07:09:11.354967954Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:09:11.356322 containerd[1592]: time="2025-08-13T07:09:11.354975384Z" level=info msg="Start streaming server" Aug 13 07:09:11.355179 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:09:11.360215 containerd[1592]: time="2025-08-13T07:09:11.360161350Z" level=info msg="containerd successfully booted in 0.112724s" Aug 13 07:09:11.505982 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:09:11.540936 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:09:11.550917 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:09:11.574240 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:09:11.574617 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:09:11.591369 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:09:11.612781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:09:11.625268 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:09:11.634901 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:09:11.640458 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:09:12.224643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:12.227155 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:09:12.230966 systemd[1]: Startup finished in 7.262s (kernel) + 7.383s (userspace) = 14.645s. Aug 13 07:09:12.240189 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:12.910295 kubelet[1701]: E0813 07:09:12.910171 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:12.913468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:12.913878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:13.132845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:09:13.139897 systemd[1]: Started sshd@0-164.92.99.201:22-139.178.89.65:41354.service - OpenSSH per-connection server daemon (139.178.89.65:41354). 
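Annotation: the kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml does not exist until the node is bootstrapped, so the first start fails and the unit is restarted later in the log. For reference, a minimal sketch of the kind of KubeletConfiguration file that path refers to; every value below is an illustrative assumption (on a kubeadm-managed node this file is generated during join, not written by hand), and the sketch deliberately writes to /tmp rather than /var/lib/kubelet:

    # Sketch: write a minimal KubeletConfiguration of the kind the error
    # above says is missing. Values are assumptions, not taken from this node.
    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches the cgroup v1 driver reported later in this log
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
    """

    with open("/tmp/kubelet-config-example.yaml", "w") as f:  # not /var/lib/kubelet!
        f.write(MINIMAL_KUBELET_CONFIG)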
Aug 13 07:09:13.225991 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 41354 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:13.230022 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:13.241557 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:09:13.249786 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:09:13.253730 systemd-logind[1563]: New session 1 of user core. Aug 13 07:09:13.272673 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:09:13.290021 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:09:13.294876 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:09:13.440054 systemd[1719]: Queued start job for default target default.target. Aug 13 07:09:13.440730 systemd[1719]: Created slice app.slice - User Application Slice. Aug 13 07:09:13.440766 systemd[1719]: Reached target paths.target - Paths. Aug 13 07:09:13.440788 systemd[1719]: Reached target timers.target - Timers. Aug 13 07:09:13.453677 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:09:13.463879 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:09:13.463984 systemd[1719]: Reached target sockets.target - Sockets. Aug 13 07:09:13.464000 systemd[1719]: Reached target basic.target - Basic System. Aug 13 07:09:13.464053 systemd[1719]: Reached target default.target - Main User Target. Aug 13 07:09:13.464087 systemd[1719]: Startup finished in 160ms. Aug 13 07:09:13.465059 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:09:13.470986 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:09:13.535893 systemd[1]: Started sshd@1-164.92.99.201:22-139.178.89.65:41366.service - OpenSSH per-connection server daemon (139.178.89.65:41366). Aug 13 07:09:13.608265 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 41366 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:13.610571 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:13.618291 systemd-logind[1563]: New session 2 of user core. Aug 13 07:09:13.623971 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:09:13.694297 sshd[1731]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:13.704347 systemd[1]: Started sshd@2-164.92.99.201:22-139.178.89.65:41376.service - OpenSSH per-connection server daemon (139.178.89.65:41376). Aug 13 07:09:13.705605 systemd[1]: sshd@1-164.92.99.201:22-139.178.89.65:41366.service: Deactivated successfully. Aug 13 07:09:13.713864 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:09:13.715612 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:09:13.719649 systemd-logind[1563]: Removed session 2. Aug 13 07:09:13.757622 sshd[1736]: Accepted publickey for core from 139.178.89.65 port 41376 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:13.759454 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:13.764823 systemd-logind[1563]: New session 3 of user core. Aug 13 07:09:13.775050 systemd[1]: Started session-3.scope - Session 3 of User core. 
Aug 13 07:09:13.833681 sshd[1736]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:13.838083 systemd[1]: sshd@2-164.92.99.201:22-139.178.89.65:41376.service: Deactivated successfully. Aug 13 07:09:13.842037 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:09:13.842278 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:09:13.848784 systemd[1]: Started sshd@3-164.92.99.201:22-139.178.89.65:41382.service - OpenSSH per-connection server daemon (139.178.89.65:41382). Aug 13 07:09:13.849784 systemd-logind[1563]: Removed session 3. Aug 13 07:09:13.893879 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 41382 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:13.896112 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:13.902934 systemd-logind[1563]: New session 4 of user core. Aug 13 07:09:13.905729 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:09:13.970691 sshd[1747]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:13.977853 systemd[1]: Started sshd@4-164.92.99.201:22-139.178.89.65:41390.service - OpenSSH per-connection server daemon (139.178.89.65:41390). Aug 13 07:09:13.979503 systemd[1]: sshd@3-164.92.99.201:22-139.178.89.65:41382.service: Deactivated successfully. Aug 13 07:09:13.984649 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:09:13.985896 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:09:13.991674 systemd-logind[1563]: Removed session 4. Aug 13 07:09:14.028569 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 41390 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:14.030320 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:14.035867 systemd-logind[1563]: New session 5 of user core. Aug 13 07:09:14.051982 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:09:14.123259 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:09:14.124238 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:14.137726 sudo[1759]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:14.142139 sshd[1752]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:14.151857 systemd[1]: Started sshd@5-164.92.99.201:22-139.178.89.65:41402.service - OpenSSH per-connection server daemon (139.178.89.65:41402). Aug 13 07:09:14.152719 systemd[1]: sshd@4-164.92.99.201:22-139.178.89.65:41390.service: Deactivated successfully. Aug 13 07:09:14.155276 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:09:14.157730 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:09:14.160483 systemd-logind[1563]: Removed session 5. Aug 13 07:09:14.201586 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 41402 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:14.203635 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:14.209620 systemd-logind[1563]: New session 6 of user core. Aug 13 07:09:14.222833 systemd[1]: Started session-6.scope - Session 6 of User core. 
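Annotation: the blocks above are ordinary socket-activated SSH logins: sshd accepts the public key, pam_unix opens a session, systemd-logind assigns a session number, and sudo records each elevated command with PWD/USER/COMMAND fields. A small, illustrative parser for exactly those journal lines; the regexes are written against the formats shown here and nothing more:

    # Sketch: pull SSH session events and sudo commands out of journal text
    # in the exact format shown above.
    import re

    SESSION_RE = re.compile(
        r"sshd\[\d+\]: pam_unix\(sshd:session\): session (opened|closed) for user (\w+)"
    )
    SUDO_RE = re.compile(
        r"sudo\[\d+\]:\s+(\w+) : PWD=(\S+) ; USER=(\w+) ; COMMAND=(.+)$"
    )

    def summarize(journal_lines):
        for line in journal_lines:
            m = SESSION_RE.search(line)
            if m:
                yield "session %s for %s" % (m.group(1), m.group(2))
                continue
            m = SUDO_RE.search(line)
            if m:
                yield "%s ran as %s: %s" % (m.group(1), m.group(3), m.group(4))

    example = [
        "Aug 13 07:09:14.123259 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
        "Aug 13 07:09:14.142139 sshd[1752]: pam_unix(sshd:session): session closed for user core",
    ]
    for event in summarize(example):
        print(event)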
Aug 13 07:09:14.284496 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:09:14.284875 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:14.289177 sudo[1769]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:14.295864 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:09:14.296247 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:14.316962 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:09:14.318691 auditctl[1772]: No rules Aug 13 07:09:14.320490 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:09:14.320846 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:09:14.327281 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:09:14.372759 augenrules[1791]: No rules Aug 13 07:09:14.374257 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:09:14.379583 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:14.385316 sshd[1762]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:14.395694 systemd[1]: Started sshd@6-164.92.99.201:22-139.178.89.65:41412.service - OpenSSH per-connection server daemon (139.178.89.65:41412). Aug 13 07:09:14.396152 systemd[1]: sshd@5-164.92.99.201:22-139.178.89.65:41402.service: Deactivated successfully. Aug 13 07:09:14.403786 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:09:14.404941 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:09:14.407510 systemd-logind[1563]: Removed session 6. Aug 13 07:09:14.434258 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 41412 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:14.436061 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:14.441071 systemd-logind[1563]: New session 7 of user core. Aug 13 07:09:14.450178 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:09:14.510091 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:09:14.510526 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:15.209009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:15.215825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:15.262352 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit session-7.scope)... Aug 13 07:09:15.262378 systemd[1]: Reloading... Aug 13 07:09:15.424437 zram_generator::config[1876]: No configuration found. Aug 13 07:09:15.599338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:15.681596 systemd[1]: Reloading finished in 418 ms. Aug 13 07:09:15.739793 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:09:15.739873 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:09:15.740275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
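Annotation: the "Reloading requested from client PID 1837 ('systemctl')" entry followed by kubelet being stopped and started again is what a daemon reload plus unit restart looks like from the journal side. The install.sh run under sudo is not shown in this log, so the following is only a hypothetical sketch of commands that would produce such entries, not its actual contents:

    # Hypothetical sketch: systemctl calls consistent with the "Reloading..."
    # and kubelet stop/start entries above. install.sh itself is not shown,
    # so this is an assumption about the kind of sequence involved.
    import subprocess

    def reload_and_restart(unit="kubelet.service"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)   # "Reloading requested..."
        subprocess.run(["systemctl", "restart", unit], check=True)   # Stopped ... / Starting ...
        state = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(unit, "is", state)

    if __name__ == "__main__":
        reload_and_restart()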
Aug 13 07:09:15.750834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:15.883610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:15.898087 (kubelet)[1941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:15.954328 kubelet[1941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:15.954328 kubelet[1941]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:09:15.954328 kubelet[1941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:15.954897 kubelet[1941]: I0813 07:09:15.954375 1941 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:09:16.655183 kubelet[1941]: I0813 07:09:16.655134 1941 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:09:16.657214 kubelet[1941]: I0813 07:09:16.655325 1941 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:09:16.657214 kubelet[1941]: I0813 07:09:16.655625 1941 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:09:16.677758 kubelet[1941]: I0813 07:09:16.677661 1941 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:09:16.689557 kubelet[1941]: E0813 07:09:16.689470 1941 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:09:16.689557 kubelet[1941]: I0813 07:09:16.689523 1941 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:09:16.695372 kubelet[1941]: I0813 07:09:16.695307 1941 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:09:16.696450 kubelet[1941]: I0813 07:09:16.696387 1941 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:09:16.696700 kubelet[1941]: I0813 07:09:16.696642 1941 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:09:16.696904 kubelet[1941]: I0813 07:09:16.696692 1941 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"164.92.99.201","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:09:16.697053 kubelet[1941]: I0813 07:09:16.696907 1941 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:09:16.697053 kubelet[1941]: I0813 07:09:16.696925 1941 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:09:16.697118 kubelet[1941]: I0813 07:09:16.697066 1941 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:16.700208 kubelet[1941]: I0813 07:09:16.699699 1941 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:09:16.700208 kubelet[1941]: I0813 07:09:16.699737 1941 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:09:16.700208 kubelet[1941]: I0813 07:09:16.699779 1941 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:09:16.700208 kubelet[1941]: I0813 07:09:16.699809 1941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:09:16.702147 kubelet[1941]: E0813 07:09:16.702103 1941 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:16.702221 kubelet[1941]: E0813 07:09:16.702170 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:16.705486 kubelet[1941]: I0813 07:09:16.705460 1941 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:09:16.706139 kubelet[1941]: I0813 07:09:16.706118 1941 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:09:16.706758 kubelet[1941]: W0813 07:09:16.706737 1941 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:09:16.709429 kubelet[1941]: I0813 07:09:16.709345 1941 server.go:1274] "Started kubelet" Aug 13 07:09:16.711618 kubelet[1941]: I0813 07:09:16.711585 1941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:09:16.720117 kubelet[1941]: I0813 07:09:16.718837 1941 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:09:16.720305 kubelet[1941]: I0813 07:09:16.720269 1941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:09:16.721084 kubelet[1941]: I0813 07:09:16.720914 1941 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:09:16.721300 kubelet[1941]: I0813 07:09:16.721230 1941 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:09:16.722931 kubelet[1941]: I0813 07:09:16.722672 1941 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:09:16.726279 kubelet[1941]: E0813 07:09:16.724725 1941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{164.92.99.201.185b41ead8f91295 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:164.92.99.201,UID:164.92.99.201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:164.92.99.201,},FirstTimestamp:2025-08-13 07:09:16.709294741 +0000 UTC m=+0.806013730,LastTimestamp:2025-08-13 07:09:16.709294741 +0000 UTC m=+0.806013730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:164.92.99.201,}" Aug 13 07:09:16.726470 kubelet[1941]: W0813 07:09:16.726342 1941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "164.92.99.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Aug 13 07:09:16.726470 kubelet[1941]: E0813 07:09:16.726375 1941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"164.92.99.201\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Aug 13 07:09:16.727419 kubelet[1941]: W0813 07:09:16.726536 1941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Aug 13 07:09:16.727419 kubelet[1941]: E0813 07:09:16.726555 1941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Aug 13 07:09:16.727419 kubelet[1941]: I0813 07:09:16.727124 1941 
volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:09:16.727419 kubelet[1941]: I0813 07:09:16.727286 1941 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:09:16.727419 kubelet[1941]: I0813 07:09:16.727337 1941 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:09:16.730419 kubelet[1941]: E0813 07:09:16.727628 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:16.730419 kubelet[1941]: E0813 07:09:16.728269 1941 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:09:16.730419 kubelet[1941]: I0813 07:09:16.728583 1941 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:09:16.730419 kubelet[1941]: I0813 07:09:16.728706 1941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:09:16.730419 kubelet[1941]: E0813 07:09:16.729488 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"164.92.99.201\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 13 07:09:16.730419 kubelet[1941]: E0813 07:09:16.729523 1941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{164.92.99.201.185b41ead91ee98a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:164.92.99.201,UID:164.92.99.201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:164.92.99.201,},FirstTimestamp:2025-08-13 07:09:16.711774602 +0000 UTC m=+0.808493590,LastTimestamp:2025-08-13 07:09:16.711774602 +0000 UTC m=+0.808493590,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:164.92.99.201,}" Aug 13 07:09:16.730706 kubelet[1941]: W0813 07:09:16.729833 1941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 07:09:16.730706 kubelet[1941]: E0813 07:09:16.729855 1941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Aug 13 07:09:16.735291 kubelet[1941]: I0813 07:09:16.735256 1941 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:09:16.745028 kubelet[1941]: E0813 07:09:16.739764 1941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{164.92.99.201.185b41eada1a6964 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:164.92.99.201,UID:164.92.99.201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:164.92.99.201,},FirstTimestamp:2025-08-13 07:09:16.728256868 +0000 UTC m=+0.824975855,LastTimestamp:2025-08-13 07:09:16.728256868 +0000 UTC m=+0.824975855,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:164.92.99.201,}" Aug 13 07:09:16.767058 kubelet[1941]: I0813 07:09:16.767027 1941 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:09:16.767058 kubelet[1941]: I0813 07:09:16.767057 1941 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:16.767218 kubelet[1941]: I0813 07:09:16.767084 1941 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:16.769937 kubelet[1941]: I0813 07:09:16.769896 1941 policy_none.go:49] "None policy: Start" Aug 13 07:09:16.771025 kubelet[1941]: I0813 07:09:16.771000 1941 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:09:16.771114 kubelet[1941]: I0813 07:09:16.771042 1941 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:16.779061 kubelet[1941]: I0813 07:09:16.779030 1941 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:09:16.780430 kubelet[1941]: I0813 07:09:16.779767 1941 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:16.780430 kubelet[1941]: I0813 07:09:16.779786 1941 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:16.782935 kubelet[1941]: I0813 07:09:16.782910 1941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:16.787195 kubelet[1941]: E0813 07:09:16.787132 1941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"164.92.99.201\" not found" Aug 13 07:09:16.811829 kubelet[1941]: I0813 07:09:16.811782 1941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:09:16.814061 kubelet[1941]: I0813 07:09:16.814026 1941 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:09:16.814211 kubelet[1941]: I0813 07:09:16.814201 1941 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:09:16.814286 kubelet[1941]: I0813 07:09:16.814278 1941 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:09:16.814605 kubelet[1941]: E0813 07:09:16.814506 1941 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 07:09:16.881265 kubelet[1941]: I0813 07:09:16.881222 1941 kubelet_node_status.go:72] "Attempting to register node" node="164.92.99.201" Aug 13 07:09:16.895945 kubelet[1941]: I0813 07:09:16.895895 1941 kubelet_node_status.go:75] "Successfully registered node" node="164.92.99.201" Aug 13 07:09:16.895945 kubelet[1941]: E0813 07:09:16.895936 1941 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"164.92.99.201\": node \"164.92.99.201\" not found" Aug 13 07:09:16.916606 kubelet[1941]: E0813 07:09:16.916381 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:16.930574 sudo[1804]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:16.934873 sshd[1797]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:16.940504 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:09:16.941889 systemd[1]: sshd@6-164.92.99.201:22-139.178.89.65:41412.service: Deactivated successfully. Aug 13 07:09:16.946468 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:09:16.948493 systemd-logind[1563]: Removed session 7. Aug 13 07:09:17.017210 kubelet[1941]: E0813 07:09:17.017124 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.117815 kubelet[1941]: E0813 07:09:17.117736 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.218916 kubelet[1941]: E0813 07:09:17.218735 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.319488 kubelet[1941]: E0813 07:09:17.319385 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.420547 kubelet[1941]: E0813 07:09:17.420484 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.521550 kubelet[1941]: E0813 07:09:17.521355 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.622184 kubelet[1941]: E0813 07:09:17.622126 1941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"164.92.99.201\" not found" Aug 13 07:09:17.657867 kubelet[1941]: I0813 07:09:17.657801 1941 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 07:09:17.658067 kubelet[1941]: W0813 07:09:17.658028 1941 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Aug 13 07:09:17.702380 kubelet[1941]: E0813 07:09:17.702306 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 
07:09:17.723746 kubelet[1941]: I0813 07:09:17.723605 1941 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 07:09:17.724332 containerd[1592]: time="2025-08-13T07:09:17.724262792Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:09:17.724846 kubelet[1941]: I0813 07:09:17.724599 1941 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 07:09:18.702654 kubelet[1941]: E0813 07:09:18.702596 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:18.702654 kubelet[1941]: I0813 07:09:18.702680 1941 apiserver.go:52] "Watching apiserver" Aug 13 07:09:18.712377 kubelet[1941]: E0813 07:09:18.711265 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:18.728962 kubelet[1941]: I0813 07:09:18.728923 1941 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:09:18.736803 kubelet[1941]: I0813 07:09:18.736754 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-cni-bin-dir\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738426 kubelet[1941]: I0813 07:09:18.737073 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-cni-log-dir\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738426 kubelet[1941]: I0813 07:09:18.737106 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-cni-net-dir\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738426 kubelet[1941]: I0813 07:09:18.737796 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-policysync\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738426 kubelet[1941]: I0813 07:09:18.737835 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec55e7a6-fe64-438a-802b-0a936c8a1bea-tigera-ca-bundle\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738426 kubelet[1941]: I0813 07:09:18.737858 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/144517d3-bb6f-4f9f-885e-fc69077ed86f-xtables-lock\") pod \"kube-proxy-ntrkc\" (UID: 
\"144517d3-bb6f-4f9f-885e-fc69077ed86f\") " pod="kube-system/kube-proxy-ntrkc" Aug 13 07:09:18.738749 kubelet[1941]: I0813 07:09:18.737891 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/144517d3-bb6f-4f9f-885e-fc69077ed86f-lib-modules\") pod \"kube-proxy-ntrkc\" (UID: \"144517d3-bb6f-4f9f-885e-fc69077ed86f\") " pod="kube-system/kube-proxy-ntrkc" Aug 13 07:09:18.738749 kubelet[1941]: I0813 07:09:18.737919 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-flexvol-driver-host\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738749 kubelet[1941]: I0813 07:09:18.737947 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-lib-modules\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738749 kubelet[1941]: I0813 07:09:18.737979 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-xtables-lock\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738749 kubelet[1941]: I0813 07:09:18.738011 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/863ff698-bcf1-43dc-8890-89f9cd527211-kubelet-dir\") pod \"csi-node-driver-jl8pz\" (UID: \"863ff698-bcf1-43dc-8890-89f9cd527211\") " pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:18.738906 kubelet[1941]: I0813 07:09:18.738040 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/144517d3-bb6f-4f9f-885e-fc69077ed86f-kube-proxy\") pod \"kube-proxy-ntrkc\" (UID: \"144517d3-bb6f-4f9f-885e-fc69077ed86f\") " pod="kube-system/kube-proxy-ntrkc" Aug 13 07:09:18.738906 kubelet[1941]: I0813 07:09:18.738066 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7cg\" (UniqueName: \"kubernetes.io/projected/144517d3-bb6f-4f9f-885e-fc69077ed86f-kube-api-access-5h7cg\") pod \"kube-proxy-ntrkc\" (UID: \"144517d3-bb6f-4f9f-885e-fc69077ed86f\") " pod="kube-system/kube-proxy-ntrkc" Aug 13 07:09:18.738906 kubelet[1941]: I0813 07:09:18.738137 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-var-lib-calico\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738906 kubelet[1941]: I0813 07:09:18.738172 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw75p\" (UniqueName: \"kubernetes.io/projected/ec55e7a6-fe64-438a-802b-0a936c8a1bea-kube-api-access-pw75p\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " 
pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.738906 kubelet[1941]: I0813 07:09:18.738206 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/863ff698-bcf1-43dc-8890-89f9cd527211-varrun\") pod \"csi-node-driver-jl8pz\" (UID: \"863ff698-bcf1-43dc-8890-89f9cd527211\") " pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:18.739069 kubelet[1941]: I0813 07:09:18.738241 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzk9g\" (UniqueName: \"kubernetes.io/projected/863ff698-bcf1-43dc-8890-89f9cd527211-kube-api-access-nzk9g\") pod \"csi-node-driver-jl8pz\" (UID: \"863ff698-bcf1-43dc-8890-89f9cd527211\") " pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:18.739069 kubelet[1941]: I0813 07:09:18.738268 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec55e7a6-fe64-438a-802b-0a936c8a1bea-node-certs\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.739069 kubelet[1941]: I0813 07:09:18.738294 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec55e7a6-fe64-438a-802b-0a936c8a1bea-var-run-calico\") pod \"calico-node-5jlpz\" (UID: \"ec55e7a6-fe64-438a-802b-0a936c8a1bea\") " pod="calico-system/calico-node-5jlpz" Aug 13 07:09:18.739069 kubelet[1941]: I0813 07:09:18.738316 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/863ff698-bcf1-43dc-8890-89f9cd527211-registration-dir\") pod \"csi-node-driver-jl8pz\" (UID: \"863ff698-bcf1-43dc-8890-89f9cd527211\") " pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:18.739212 kubelet[1941]: I0813 07:09:18.738331 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/863ff698-bcf1-43dc-8890-89f9cd527211-socket-dir\") pod \"csi-node-driver-jl8pz\" (UID: \"863ff698-bcf1-43dc-8890-89f9cd527211\") " pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:18.850465 kubelet[1941]: E0813 07:09:18.849500 1941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:09:18.850725 kubelet[1941]: W0813 07:09:18.850698 1941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:09:18.850850 kubelet[1941]: E0813 07:09:18.850833 1941 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:09:18.862554 kubelet[1941]: E0813 07:09:18.860918 1941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:09:18.862554 kubelet[1941]: W0813 07:09:18.860975 1941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:09:18.862554 kubelet[1941]: E0813 07:09:18.861020 1941 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:09:18.870360 kubelet[1941]: E0813 07:09:18.870325 1941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:09:18.870360 kubelet[1941]: W0813 07:09:18.870351 1941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:09:18.870554 kubelet[1941]: E0813 07:09:18.870387 1941 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:09:18.874006 kubelet[1941]: E0813 07:09:18.873877 1941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:09:18.874006 kubelet[1941]: W0813 07:09:18.873925 1941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:09:18.874006 kubelet[1941]: E0813 07:09:18.873956 1941 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:09:19.013200 kubelet[1941]: E0813 07:09:19.012952 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:19.015513 containerd[1592]: time="2025-08-13T07:09:19.014532843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ntrkc,Uid:144517d3-bb6f-4f9f-885e-fc69077ed86f,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:19.016308 containerd[1592]: time="2025-08-13T07:09:19.016261614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jlpz,Uid:ec55e7a6-fe64-438a-802b-0a936c8a1bea,Namespace:calico-system,Attempt:0,}" Aug 13 07:09:19.553988 containerd[1592]: time="2025-08-13T07:09:19.553922158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:19.555252 containerd[1592]: time="2025-08-13T07:09:19.555196563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:19.556253 containerd[1592]: time="2025-08-13T07:09:19.555977524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:19.556253 containerd[1592]: time="2025-08-13T07:09:19.556136310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:09:19.556253 containerd[1592]: time="2025-08-13T07:09:19.556220351Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:19.559912 containerd[1592]: time="2025-08-13T07:09:19.559861161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:19.561028 containerd[1592]: time="2025-08-13T07:09:19.560711387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.362004ms" Aug 13 07:09:19.561880 containerd[1592]: time="2025-08-13T07:09:19.561840233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.166246ms" Aug 13 07:09:19.697983 containerd[1592]: time="2025-08-13T07:09:19.697869264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:19.698235 containerd[1592]: time="2025-08-13T07:09:19.698176909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:19.699162 containerd[1592]: time="2025-08-13T07:09:19.698218126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:19.699250 containerd[1592]: time="2025-08-13T07:09:19.699096318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:19.703365 kubelet[1941]: E0813 07:09:19.703276 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:19.705206 containerd[1592]: time="2025-08-13T07:09:19.704772417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:19.705206 containerd[1592]: time="2025-08-13T07:09:19.704991012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:19.705206 containerd[1592]: time="2025-08-13T07:09:19.705147115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:19.708236 containerd[1592]: time="2025-08-13T07:09:19.707741683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:19.831246 containerd[1592]: time="2025-08-13T07:09:19.831132171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jlpz,Uid:ec55e7a6-fe64-438a-802b-0a936c8a1bea,Namespace:calico-system,Attempt:0,} returns sandbox id \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\"" Aug 13 07:09:19.835340 containerd[1592]: time="2025-08-13T07:09:19.835272125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:09:19.844206 containerd[1592]: time="2025-08-13T07:09:19.844044767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ntrkc,Uid:144517d3-bb6f-4f9f-885e-fc69077ed86f,Namespace:kube-system,Attempt:0,} returns sandbox id \"06927c30571544fa85be1fccf205f02b1ecdcfd23bac85a7b74e86629116227c\"" Aug 13 07:09:19.845214 kubelet[1941]: E0813 07:09:19.844818 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:19.850892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013755716.mount: Deactivated successfully. Aug 13 07:09:20.704273 kubelet[1941]: E0813 07:09:20.704224 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:20.815960 kubelet[1941]: E0813 07:09:20.815418 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:21.000818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569505504.mount: Deactivated successfully. 
Aug 13 07:09:21.091715 containerd[1592]: time="2025-08-13T07:09:21.090708532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:21.091715 containerd[1592]: time="2025-08-13T07:09:21.091644635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 13 07:09:21.092285 containerd[1592]: time="2025-08-13T07:09:21.092253209Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:21.094426 containerd[1592]: time="2025-08-13T07:09:21.094356092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:21.095375 containerd[1592]: time="2025-08-13T07:09:21.095340735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.260025997s" Aug 13 07:09:21.095663 containerd[1592]: time="2025-08-13T07:09:21.095630042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:09:21.097651 containerd[1592]: time="2025-08-13T07:09:21.097615410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:09:21.101561 containerd[1592]: time="2025-08-13T07:09:21.101502813Z" level=info msg="CreateContainer within sandbox \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:09:21.114390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844989142.mount: Deactivated successfully. 
Aug 13 07:09:21.119840 containerd[1592]: time="2025-08-13T07:09:21.119795508Z" level=info msg="CreateContainer within sandbox \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a\"" Aug 13 07:09:21.120937 containerd[1592]: time="2025-08-13T07:09:21.120901244Z" level=info msg="StartContainer for \"5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a\"" Aug 13 07:09:21.202630 containerd[1592]: time="2025-08-13T07:09:21.202554126Z" level=info msg="StartContainer for \"5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a\" returns successfully" Aug 13 07:09:21.262741 containerd[1592]: time="2025-08-13T07:09:21.262526729Z" level=info msg="shim disconnected" id=5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a namespace=k8s.io Aug 13 07:09:21.262741 containerd[1592]: time="2025-08-13T07:09:21.262597578Z" level=warning msg="cleaning up after shim disconnected" id=5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a namespace=k8s.io Aug 13 07:09:21.262741 containerd[1592]: time="2025-08-13T07:09:21.262607463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:21.705625 kubelet[1941]: E0813 07:09:21.705414 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:21.962547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5402e062bf3ff623a43085e0466f266682711238c75dbcb671736af7746de82a-rootfs.mount: Deactivated successfully. Aug 13 07:09:22.229302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144952238.mount: Deactivated successfully. 
Aug 13 07:09:22.705717 kubelet[1941]: E0813 07:09:22.705577 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:22.789899 containerd[1592]: time="2025-08-13T07:09:22.789845035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.791260 containerd[1592]: time="2025-08-13T07:09:22.791217078Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:09:22.792458 containerd[1592]: time="2025-08-13T07:09:22.792415516Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.795454 containerd[1592]: time="2025-08-13T07:09:22.795006453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.796869 containerd[1592]: time="2025-08-13T07:09:22.796195286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.698541384s" Aug 13 07:09:22.796869 containerd[1592]: time="2025-08-13T07:09:22.796246120Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:09:22.798412 containerd[1592]: time="2025-08-13T07:09:22.798132537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:09:22.799480 containerd[1592]: time="2025-08-13T07:09:22.799268153Z" level=info msg="CreateContainer within sandbox \"06927c30571544fa85be1fccf205f02b1ecdcfd23bac85a7b74e86629116227c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:09:22.815290 kubelet[1941]: E0813 07:09:22.815231 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:22.820307 containerd[1592]: time="2025-08-13T07:09:22.820237747Z" level=info msg="CreateContainer within sandbox \"06927c30571544fa85be1fccf205f02b1ecdcfd23bac85a7b74e86629116227c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e72f0fc10938cb7a4aaed55d573811ac68feeca6355250e5c1c08055233ea62\"" Aug 13 07:09:22.821024 containerd[1592]: time="2025-08-13T07:09:22.820990323Z" level=info msg="StartContainer for \"6e72f0fc10938cb7a4aaed55d573811ac68feeca6355250e5c1c08055233ea62\"" Aug 13 07:09:22.918603 containerd[1592]: time="2025-08-13T07:09:22.918553277Z" level=info msg="StartContainer for \"6e72f0fc10938cb7a4aaed55d573811ac68feeca6355250e5c1c08055233ea62\" returns successfully" Aug 13 07:09:23.707389 kubelet[1941]: E0813 07:09:23.707145 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:23.838567 kubelet[1941]: E0813 
07:09:23.838390 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:23.869281 kubelet[1941]: I0813 07:09:23.869210 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ntrkc" podStartSLOduration=4.917444474 podStartE2EDuration="7.869192309s" podCreationTimestamp="2025-08-13 07:09:16 +0000 UTC" firstStartedPulling="2025-08-13 07:09:19.845807973 +0000 UTC m=+3.942526950" lastFinishedPulling="2025-08-13 07:09:22.797555801 +0000 UTC m=+6.894274785" observedRunningTime="2025-08-13 07:09:23.867089644 +0000 UTC m=+7.963808631" watchObservedRunningTime="2025-08-13 07:09:23.869192309 +0000 UTC m=+7.965911294" Aug 13 07:09:24.707492 kubelet[1941]: E0813 07:09:24.707446 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:24.816919 kubelet[1941]: E0813 07:09:24.815680 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:24.841613 kubelet[1941]: E0813 07:09:24.841568 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:25.708626 kubelet[1941]: E0813 07:09:25.708576 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:25.951457 containerd[1592]: time="2025-08-13T07:09:25.950483482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.952127 containerd[1592]: time="2025-08-13T07:09:25.951699800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:09:25.952774 containerd[1592]: time="2025-08-13T07:09:25.952709580Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.957306 containerd[1592]: time="2025-08-13T07:09:25.956513382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.957682 containerd[1592]: time="2025-08-13T07:09:25.957640271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.15945732s" Aug 13 07:09:25.957830 containerd[1592]: time="2025-08-13T07:09:25.957807862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:09:25.962065 containerd[1592]: time="2025-08-13T07:09:25.961923162Z" level=info msg="CreateContainer within sandbox 
\"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:09:25.989277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264828831.mount: Deactivated successfully. Aug 13 07:09:25.990343 containerd[1592]: time="2025-08-13T07:09:25.990126535Z" level=info msg="CreateContainer within sandbox \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805\"" Aug 13 07:09:25.991450 containerd[1592]: time="2025-08-13T07:09:25.991166158Z" level=info msg="StartContainer for \"a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805\"" Aug 13 07:09:26.078848 containerd[1592]: time="2025-08-13T07:09:26.078714715Z" level=info msg="StartContainer for \"a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805\" returns successfully" Aug 13 07:09:26.709930 kubelet[1941]: E0813 07:09:26.709668 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:26.816931 kubelet[1941]: E0813 07:09:26.815905 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:26.818989 kubelet[1941]: I0813 07:09:26.818367 1941 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:09:26.827964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805-rootfs.mount: Deactivated successfully. Aug 13 07:09:26.860752 containerd[1592]: time="2025-08-13T07:09:26.860534393Z" level=info msg="shim disconnected" id=a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805 namespace=k8s.io Aug 13 07:09:26.860752 containerd[1592]: time="2025-08-13T07:09:26.860610674Z" level=warning msg="cleaning up after shim disconnected" id=a0ca7e7c5edc4ecaa25a941d2c9c3fd7cd8b6f6aad73514a43ebedd4dcf32805 namespace=k8s.io Aug 13 07:09:26.860752 containerd[1592]: time="2025-08-13T07:09:26.860619920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:27.710819 kubelet[1941]: E0813 07:09:27.710742 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:27.854613 containerd[1592]: time="2025-08-13T07:09:27.854541071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:09:27.856267 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Aug 13 07:09:28.712861 kubelet[1941]: E0813 07:09:28.711530 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:28.821823 containerd[1592]: time="2025-08-13T07:09:28.821356865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl8pz,Uid:863ff698-bcf1-43dc-8890-89f9cd527211,Namespace:calico-system,Attempt:0,}" Aug 13 07:09:28.912428 containerd[1592]: time="2025-08-13T07:09:28.912231446Z" level=error msg="Failed to destroy network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:28.916461 containerd[1592]: time="2025-08-13T07:09:28.915852523Z" level=error msg="encountered an error cleaning up failed sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:28.916461 containerd[1592]: time="2025-08-13T07:09:28.916009056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl8pz,Uid:863ff698-bcf1-43dc-8890-89f9cd527211,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:28.916993 kubelet[1941]: E0813 07:09:28.916499 1941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:28.916993 kubelet[1941]: E0813 07:09:28.916587 1941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:28.916993 kubelet[1941]: E0813 07:09:28.916610 1941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl8pz" Aug 13 07:09:28.917239 kubelet[1941]: E0813 07:09:28.916663 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl8pz_calico-system(863ff698-bcf1-43dc-8890-89f9cd527211)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-jl8pz_calico-system(863ff698-bcf1-43dc-8890-89f9cd527211)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:28.918773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d-shm.mount: Deactivated successfully. Aug 13 07:09:29.712819 kubelet[1941]: E0813 07:09:29.712670 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:29.858278 kubelet[1941]: I0813 07:09:29.857534 1941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:29.859451 containerd[1592]: time="2025-08-13T07:09:29.859098457Z" level=info msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" Aug 13 07:09:29.859451 containerd[1592]: time="2025-08-13T07:09:29.859301591Z" level=info msg="Ensure that sandbox 52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d in task-service has been cleanup successfully" Aug 13 07:09:29.916835 containerd[1592]: time="2025-08-13T07:09:29.916748225Z" level=error msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" failed" error="failed to destroy network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:29.918053 kubelet[1941]: E0813 07:09:29.917778 1941 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:29.918053 kubelet[1941]: E0813 07:09:29.917866 1941 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d"} Aug 13 07:09:29.918053 kubelet[1941]: E0813 07:09:29.917953 1941 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"863ff698-bcf1-43dc-8890-89f9cd527211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:09:29.918053 kubelet[1941]: E0813 07:09:29.917984 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"863ff698-bcf1-43dc-8890-89f9cd527211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl8pz" podUID="863ff698-bcf1-43dc-8890-89f9cd527211" Aug 13 07:09:30.713435 kubelet[1941]: E0813 07:09:30.713168 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:30.936675 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 13 07:09:31.713637 kubelet[1941]: E0813 07:09:31.713506 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:32.025181 kubelet[1941]: I0813 07:09:32.024989 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvrr\" (UniqueName: \"kubernetes.io/projected/918bd42e-7b1b-4175-829b-4def6d17c3dc-kube-api-access-wvvrr\") pod \"nginx-deployment-8587fbcb89-4krzr\" (UID: \"918bd42e-7b1b-4175-829b-4def6d17c3dc\") " pod="default/nginx-deployment-8587fbcb89-4krzr" Aug 13 07:09:32.454439 containerd[1592]: time="2025-08-13T07:09:32.454366288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4krzr,Uid:918bd42e-7b1b-4175-829b-4def6d17c3dc,Namespace:default,Attempt:0,}" Aug 13 07:09:32.627145 containerd[1592]: time="2025-08-13T07:09:32.626663185Z" level=error msg="Failed to destroy network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:32.627145 containerd[1592]: time="2025-08-13T07:09:32.627099080Z" level=error msg="encountered an error cleaning up failed sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:32.627512 containerd[1592]: time="2025-08-13T07:09:32.627335966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4krzr,Uid:918bd42e-7b1b-4175-829b-4def6d17c3dc,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:32.627988 kubelet[1941]: E0813 07:09:32.627911 1941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:32.627988 kubelet[1941]: E0813 07:09:32.627984 1941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4krzr" Aug 13 07:09:32.628257 kubelet[1941]: E0813 07:09:32.628006 1941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4krzr" Aug 13 07:09:32.628257 kubelet[1941]: E0813 07:09:32.628061 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4krzr_default(918bd42e-7b1b-4175-829b-4def6d17c3dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4krzr_default(918bd42e-7b1b-4175-829b-4def6d17c3dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4krzr" podUID="918bd42e-7b1b-4175-829b-4def6d17c3dc" Aug 13 07:09:32.630700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d-shm.mount: Deactivated successfully. 
Aug 13 07:09:32.714537 kubelet[1941]: E0813 07:09:32.714331 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:32.864341 kubelet[1941]: I0813 07:09:32.864289 1941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:32.865789 containerd[1592]: time="2025-08-13T07:09:32.865138893Z" level=info msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" Aug 13 07:09:32.865789 containerd[1592]: time="2025-08-13T07:09:32.865468550Z" level=info msg="Ensure that sandbox 256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d in task-service has been cleanup successfully" Aug 13 07:09:32.928390 containerd[1592]: time="2025-08-13T07:09:32.928341951Z" level=error msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" failed" error="failed to destroy network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:09:32.928981 kubelet[1941]: E0813 07:09:32.928798 1941 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:32.928981 kubelet[1941]: E0813 07:09:32.928871 1941 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d"} Aug 13 07:09:32.928981 kubelet[1941]: E0813 07:09:32.928914 1941 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"918bd42e-7b1b-4175-829b-4def6d17c3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:09:32.928981 kubelet[1941]: E0813 07:09:32.928940 1941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"918bd42e-7b1b-4175-829b-4def6d17c3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4krzr" podUID="918bd42e-7b1b-4175-829b-4def6d17c3dc" Aug 13 07:09:33.714786 kubelet[1941]: E0813 07:09:33.714731 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:34.420383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564001158.mount: Deactivated 
successfully. Aug 13 07:09:34.465363 containerd[1592]: time="2025-08-13T07:09:34.465286026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.466373 containerd[1592]: time="2025-08-13T07:09:34.466204596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:09:34.467373 containerd[1592]: time="2025-08-13T07:09:34.466994426Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.469785 containerd[1592]: time="2025-08-13T07:09:34.469731009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.471070 containerd[1592]: time="2025-08-13T07:09:34.471016768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.616405441s" Aug 13 07:09:34.471192 containerd[1592]: time="2025-08-13T07:09:34.471075982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:09:34.522171 containerd[1592]: time="2025-08-13T07:09:34.522117726Z" level=info msg="CreateContainer within sandbox \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:09:34.577205 containerd[1592]: time="2025-08-13T07:09:34.577117873Z" level=info msg="CreateContainer within sandbox \"0de9c9de6675780ccc649cd72c40ec4da2fa32523280695ac51db693d35c26c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf\"" Aug 13 07:09:34.579482 containerd[1592]: time="2025-08-13T07:09:34.578335529Z" level=info msg="StartContainer for \"c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf\"" Aug 13 07:09:34.716251 kubelet[1941]: E0813 07:09:34.716201 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:34.735531 containerd[1592]: time="2025-08-13T07:09:34.735467194Z" level=info msg="StartContainer for \"c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf\" returns successfully" Aug 13 07:09:34.880761 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:09:34.881268 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Aug 13 07:09:34.894899 kubelet[1941]: I0813 07:09:34.894819 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5jlpz" podStartSLOduration=4.256899284 podStartE2EDuration="18.894791835s" podCreationTimestamp="2025-08-13 07:09:16 +0000 UTC" firstStartedPulling="2025-08-13 07:09:19.83477801 +0000 UTC m=+3.931496991" lastFinishedPulling="2025-08-13 07:09:34.472670553 +0000 UTC m=+18.569389542" observedRunningTime="2025-08-13 07:09:34.893341247 +0000 UTC m=+18.990060243" watchObservedRunningTime="2025-08-13 07:09:34.894791835 +0000 UTC m=+18.991510822" Aug 13 07:09:35.718270 kubelet[1941]: E0813 07:09:35.718165 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:35.876435 kubelet[1941]: I0813 07:09:35.876387 1941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:36.655636 kernel: bpftool[2672]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:09:36.700781 kubelet[1941]: E0813 07:09:36.700672 1941 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:36.719446 kubelet[1941]: E0813 07:09:36.719357 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:36.963995 systemd-networkd[1226]: vxlan.calico: Link UP Aug 13 07:09:36.964012 systemd-networkd[1226]: vxlan.calico: Gained carrier Aug 13 07:09:37.719996 kubelet[1941]: E0813 07:09:37.719938 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:38.167699 systemd-networkd[1226]: vxlan.calico: Gained IPv6LL Aug 13 07:09:38.721123 kubelet[1941]: E0813 07:09:38.721057 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:39.721459 kubelet[1941]: E0813 07:09:39.721377 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:40.722388 kubelet[1941]: E0813 07:09:40.722325 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:41.723216 kubelet[1941]: E0813 07:09:41.723160 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:42.724118 kubelet[1941]: E0813 07:09:42.724031 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:43.725143 kubelet[1941]: E0813 07:09:43.725075 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:44.725375 kubelet[1941]: E0813 07:09:44.725300 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:44.816527 containerd[1592]: time="2025-08-13T07:09:44.816423496Z" level=info msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" Aug 13 07:09:44.872730 kubelet[1941]: I0813 07:09:44.869706 1941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:44.905072 systemd[1]: run-containerd-runc-k8s.io-c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf-runc.kkr0g9.mount: Deactivated successfully. 
Aug 13 07:09:45.053957 systemd[1]: run-containerd-runc-k8s.io-c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf-runc.Xh0enY.mount: Deactivated successfully. Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.945 [INFO][2760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.945 [INFO][2760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" iface="eth0" netns="/var/run/netns/cni-8b30b2ab-6a12-452f-43ec-523f4ac6b24c" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.946 [INFO][2760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" iface="eth0" netns="/var/run/netns/cni-8b30b2ab-6a12-452f-43ec-523f4ac6b24c" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.947 [INFO][2760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" iface="eth0" netns="/var/run/netns/cni-8b30b2ab-6a12-452f-43ec-523f4ac6b24c" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.947 [INFO][2760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:44.947 [INFO][2760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.029 [INFO][2789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.029 [INFO][2789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.029 [INFO][2789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.045 [WARNING][2789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.046 [INFO][2789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.053 [INFO][2789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:45.066723 containerd[1592]: 2025-08-13 07:09:45.061 [INFO][2760] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:09:45.071130 systemd[1]: run-netns-cni\x2d8b30b2ab\x2d6a12\x2d452f\x2d43ec\x2d523f4ac6b24c.mount: Deactivated successfully. Aug 13 07:09:45.080563 containerd[1592]: time="2025-08-13T07:09:45.080478518Z" level=info msg="TearDown network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" successfully" Aug 13 07:09:45.080724 containerd[1592]: time="2025-08-13T07:09:45.080559384Z" level=info msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" returns successfully" Aug 13 07:09:45.082935 containerd[1592]: time="2025-08-13T07:09:45.082879956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl8pz,Uid:863ff698-bcf1-43dc-8890-89f9cd527211,Namespace:calico-system,Attempt:1,}" Aug 13 07:09:45.260271 systemd-networkd[1226]: cali70f9d462897: Link UP Aug 13 07:09:45.261190 systemd-networkd[1226]: cali70f9d462897: Gained carrier Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.149 [INFO][2819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.99.201-k8s-csi--node--driver--jl8pz-eth0 csi-node-driver- calico-system 863ff698-bcf1-43dc-8890-89f9cd527211 1273 0 2025-08-13 07:09:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 164.92.99.201 csi-node-driver-jl8pz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali70f9d462897 [] [] }} ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.151 [INFO][2819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.192 [INFO][2832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" HandleID="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.193 [INFO][2832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" HandleID="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-system", "node":"164.92.99.201", "pod":"csi-node-driver-jl8pz", "timestamp":"2025-08-13 07:09:45.192844352 +0000 UTC"}, Hostname:"164.92.99.201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:45.276723 
containerd[1592]: 2025-08-13 07:09:45.193 [INFO][2832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.193 [INFO][2832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.193 [INFO][2832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.99.201' Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.202 [INFO][2832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.209 [INFO][2832] ipam/ipam.go 394: Looking up existing affinities for host host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.220 [INFO][2832] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.223 [INFO][2832] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.226 [INFO][2832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.227 [INFO][2832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.229 [INFO][2832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.237 [INFO][2832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.245 [INFO][2832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.245 [INFO][2832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" host="164.92.99.201" Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.246 [INFO][2832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
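The IPAM steps above all operate on the block 192.168.94.64/26 that this node holds an affinity for, and the address claimed for csi-node-driver-jl8pz, 192.168.94.65, sits inside it. As a quick cross-check of the arithmetic, here is a standalone sketch using Go's net/netip (not Calico's IPAM code): a /26 leaves six host bits, i.e. 64 addresses per block.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block the node has an affinity for, as reported by ipam.go above.
	block := netip.MustParsePrefix("192.168.94.64/26")

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses.
	hostBits := 32 - block.Bits()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<hostBits)

	// The address claimed for csi-node-driver-jl8pz.
	ip := netip.MustParseAddr("192.168.94.65")
	fmt.Printf("%s inside %s: %v\n", ip, block, block.Contains(ip))
}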
Aug 13 07:09:45.276723 containerd[1592]: 2025-08-13 07:09:45.246 [INFO][2832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" HandleID="k8s-pod-network.2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.249 [INFO][2819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-csi--node--driver--jl8pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"863ff698-bcf1-43dc-8890-89f9cd527211", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"", Pod:"csi-node-driver-jl8pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f9d462897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.249 [INFO][2819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.65/32] ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.249 [INFO][2819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f9d462897 ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.262 [INFO][2819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.263 [INFO][2819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" 
WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-csi--node--driver--jl8pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"863ff698-bcf1-43dc-8890-89f9cd527211", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e", Pod:"csi-node-driver-jl8pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f9d462897", MAC:"c2:ec:65:29:38:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:45.277589 containerd[1592]: 2025-08-13 07:09:45.274 [INFO][2819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e" Namespace="calico-system" Pod="csi-node-driver-jl8pz" WorkloadEndpoint="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:09:45.323861 containerd[1592]: time="2025-08-13T07:09:45.323499737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:45.323861 containerd[1592]: time="2025-08-13T07:09:45.323556808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:45.323861 containerd[1592]: time="2025-08-13T07:09:45.323567733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:45.323861 containerd[1592]: time="2025-08-13T07:09:45.323673277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:45.378329 containerd[1592]: time="2025-08-13T07:09:45.378282110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl8pz,Uid:863ff698-bcf1-43dc-8890-89f9cd527211,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\"" Aug 13 07:09:45.381016 containerd[1592]: time="2025-08-13T07:09:45.380766765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:09:45.382795 systemd-resolved[1477]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
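The containerd daemon entries in this log use a logrus-style key=value layout (time="…" level=info msg="…"). A throwaway parser along these lines (an illustration, not part of containerd's own tooling) is enough to pull those fields out of, for example, the RunPodSandbox result above; note that the quoted msg value keeps its backslash-escaped inner quotes.

package main

import (
	"fmt"
	"regexp"
)

// Matches key=value pairs where the value is either double-quoted
// (possibly containing \" escapes) or a bare token.
var fieldRe = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

func parse(line string) map[string]string {
	out := map[string]string{}
	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
		if m[2] != "" {
			out[m[1]] = m[2] // quoted value, escapes left as-is
		} else {
			out[m[1]] = m[3] // bare value, e.g. level=info
		}
	}
	return out
}

func main() {
	line := `time="2025-08-13T07:09:45.378282110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl8pz,Uid:863ff698-bcf1-43dc-8890-89f9cd527211,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\""`
	f := parse(line)
	fmt.Println(f["time"], f["level"])
	fmt.Println(f["msg"])
}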
Aug 13 07:09:45.726129 kubelet[1941]: E0813 07:09:45.726072 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:46.694654 containerd[1592]: time="2025-08-13T07:09:46.694568210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:46.696131 containerd[1592]: time="2025-08-13T07:09:46.696064562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:09:46.697031 containerd[1592]: time="2025-08-13T07:09:46.696962764Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:46.699441 containerd[1592]: time="2025-08-13T07:09:46.699376001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:46.700656 containerd[1592]: time="2025-08-13T07:09:46.700503986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.319696083s" Aug 13 07:09:46.700656 containerd[1592]: time="2025-08-13T07:09:46.700563822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:09:46.703449 containerd[1592]: time="2025-08-13T07:09:46.703345612Z" level=info msg="CreateContainer within sandbox \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:09:46.716759 containerd[1592]: time="2025-08-13T07:09:46.715186459Z" level=info msg="CreateContainer within sandbox \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa47a7751c5d53fdcd8550582f942ef1b5e44cdac13ad92e7cc3f3d39b3ee127\"" Aug 13 07:09:46.717535 containerd[1592]: time="2025-08-13T07:09:46.717487741Z" level=info msg="StartContainer for \"aa47a7751c5d53fdcd8550582f942ef1b5e44cdac13ad92e7cc3f3d39b3ee127\"" Aug 13 07:09:46.727059 kubelet[1941]: E0813 07:09:46.726897 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:46.810726 containerd[1592]: time="2025-08-13T07:09:46.810624919Z" level=info msg="StartContainer for \"aa47a7751c5d53fdcd8550582f942ef1b5e44cdac13ad92e7cc3f3d39b3ee127\" returns successfully" Aug 13 07:09:46.812379 containerd[1592]: time="2025-08-13T07:09:46.812325209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:09:46.894205 systemd[1]: run-containerd-runc-k8s.io-aa47a7751c5d53fdcd8550582f942ef1b5e44cdac13ad92e7cc3f3d39b3ee127-runc.hStkv0.mount: Deactivated successfully. 
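The pull that just finished reports 8,759,190 bytes read over 1.319696083s for calico/csi:v3.30.2; the 10,251,893 figure is the recorded image size, which is not necessarily what travelled over the wire. Treating the bytes-read figure as the network volume (an assumption), the effective rate works out to roughly 6.3 MiB/s:

package main

import "fmt"

func main() {
	const (
		bytesRead = 8759190     // "bytes read" reported when the pull stopped
		seconds   = 1.319696083 // duration containerd reported for the pull
	)
	rate := bytesRead / seconds
	fmt.Printf("≈ %.0f B/s (%.2f MiB/s)\n", rate, rate/(1<<20))
}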
Aug 13 07:09:47.000316 systemd-networkd[1226]: cali70f9d462897: Gained IPv6LL Aug 13 07:09:47.728185 kubelet[1941]: E0813 07:09:47.728119 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:47.815832 containerd[1592]: time="2025-08-13T07:09:47.815651065Z" level=info msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.871 [INFO][2944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.871 [INFO][2944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" iface="eth0" netns="/var/run/netns/cni-75a18faa-c1a0-98ab-eae5-e1a2a35a4b14" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.871 [INFO][2944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" iface="eth0" netns="/var/run/netns/cni-75a18faa-c1a0-98ab-eae5-e1a2a35a4b14" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.872 [INFO][2944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" iface="eth0" netns="/var/run/netns/cni-75a18faa-c1a0-98ab-eae5-e1a2a35a4b14" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.872 [INFO][2944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.872 [INFO][2944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.898 [INFO][2952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.898 [INFO][2952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.898 [INFO][2952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.905 [WARNING][2952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.905 [INFO][2952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.907 [INFO][2952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
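"Gained IPv6LL" above means cali70f9d462897 acquired an IPv6 link-local address. Assuming the interface uses the classic EUI-64 derivation (the kernel's addr_gen_mode can be configured otherwise, so this is one plausible outcome rather than a statement about this host), the address follows from the MAC reported for that interface earlier, c2:ec:65:29:38:6d: flip the universal/local bit of the first octet and splice ff:fe into the middle, under fe80::/64.

package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// MAC recorded for cali70f9d462897 in the endpoint dump above.
	mac, err := net.ParseMAC("c2:ec:65:29:38:6d")
	if err != nil {
		panic(err)
	}

	// EUI-64: invert the U/L bit, insert 0xff 0xfe between the two halves.
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	a[8] = mac[0] ^ 0x02
	a[9], a[10] = mac[1], mac[2]
	a[11], a[12] = 0xff, 0xfe
	a[13], a[14], a[15] = mac[3], mac[4], mac[5]

	fmt.Println(netip.AddrFrom16(a)) // fe80::c0ec:65ff:fe29:386d
}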
Aug 13 07:09:47.913673 containerd[1592]: 2025-08-13 07:09:47.910 [INFO][2944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:09:47.916454 containerd[1592]: time="2025-08-13T07:09:47.916397524Z" level=info msg="TearDown network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" successfully" Aug 13 07:09:47.916454 containerd[1592]: time="2025-08-13T07:09:47.916447175Z" level=info msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" returns successfully" Aug 13 07:09:47.917665 containerd[1592]: time="2025-08-13T07:09:47.917195105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4krzr,Uid:918bd42e-7b1b-4175-829b-4def6d17c3dc,Namespace:default,Attempt:1,}" Aug 13 07:09:47.918671 systemd[1]: run-netns-cni\x2d75a18faa\x2dc1a0\x2d98ab\x2deae5\x2de1a2a35a4b14.mount: Deactivated successfully. Aug 13 07:09:48.133747 systemd-networkd[1226]: cali7aeaf3ed9af: Link UP Aug 13 07:09:48.137492 systemd-networkd[1226]: cali7aeaf3ed9af: Gained carrier Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.008 [INFO][2959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0 nginx-deployment-8587fbcb89- default 918bd42e-7b1b-4175-829b-4def6d17c3dc 1296 0 2025-08-13 07:09:31 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 164.92.99.201 nginx-deployment-8587fbcb89-4krzr eth0 default [] [] [kns.default ksa.default.default] cali7aeaf3ed9af [] [] }} ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.008 [INFO][2959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.058 [INFO][2971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" HandleID="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.058 [INFO][2971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" HandleID="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"default", "node":"164.92.99.201", "pod":"nginx-deployment-8587fbcb89-4krzr", "timestamp":"2025-08-13 07:09:48.058557692 +0000 UTC"}, Hostname:"164.92.99.201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.058 [INFO][2971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.058 [INFO][2971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.058 [INFO][2971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.99.201' Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.069 [INFO][2971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.078 [INFO][2971] ipam/ipam.go 394: Looking up existing affinities for host host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.090 [INFO][2971] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.094 [INFO][2971] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.098 [INFO][2971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.098 [INFO][2971] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.101 [INFO][2971] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.111 [INFO][2971] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.123 [INFO][2971] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.123 [INFO][2971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" host="164.92.99.201" Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.123 [INFO][2971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
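Judging purely from the Workload= strings in these entries, the endpoint key is assembled from the node name, the literal "k8s", the pod name with its internal dashes doubled, and the interface, all joined with single dashes; the doubling keeps pod-name dashes distinguishable from the separators. A tiny sketch reproducing the names seen here (an observation about these specific logs, not a documented Calico contract):

package main

import (
	"fmt"
	"strings"
)

// workloadKey reproduces the pattern visible in the log lines above:
// dashes inside the pod name are doubled so they cannot be confused
// with the separators between node, orchestrator, pod and interface.
func workloadKey(node, pod, iface string) string {
	return node + "-k8s-" + strings.ReplaceAll(pod, "-", "--") + "-" + iface
}

func main() {
	fmt.Println(workloadKey("164.92.99.201", "csi-node-driver-jl8pz", "eth0"))
	fmt.Println(workloadKey("164.92.99.201", "nginx-deployment-8587fbcb89-4krzr", "eth0"))
}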
Aug 13 07:09:48.157554 containerd[1592]: 2025-08-13 07:09:48.123 [INFO][2971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" HandleID="k8s-pod-network.10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.128 [INFO][2959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"918bd42e-7b1b-4175-829b-4def6d17c3dc", ResourceVersion:"1296", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-4krzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7aeaf3ed9af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.129 [INFO][2959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.66/32] ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.129 [INFO][2959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7aeaf3ed9af ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.137 [INFO][2959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.138 [INFO][2959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" 
WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"918bd42e-7b1b-4175-829b-4def6d17c3dc", ResourceVersion:"1296", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f", Pod:"nginx-deployment-8587fbcb89-4krzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7aeaf3ed9af", MAC:"a2:7f:86:18:6b:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:48.159004 containerd[1592]: 2025-08-13 07:09:48.155 [INFO][2959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f" Namespace="default" Pod="nginx-deployment-8587fbcb89-4krzr" WorkloadEndpoint="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:09:48.202791 containerd[1592]: time="2025-08-13T07:09:48.202415510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:48.202791 containerd[1592]: time="2025-08-13T07:09:48.202520516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:48.202791 containerd[1592]: time="2025-08-13T07:09:48.202545062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:48.202791 containerd[1592]: time="2025-08-13T07:09:48.202686930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:48.280884 containerd[1592]: time="2025-08-13T07:09:48.280846286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4krzr,Uid:918bd42e-7b1b-4175-829b-4def6d17c3dc,Namespace:default,Attempt:1,} returns sandbox id \"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f\"" Aug 13 07:09:48.405233 containerd[1592]: time="2025-08-13T07:09:48.405062513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:48.406715 containerd[1592]: time="2025-08-13T07:09:48.406660377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:09:48.408460 containerd[1592]: time="2025-08-13T07:09:48.407597573Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:48.409312 containerd[1592]: time="2025-08-13T07:09:48.409281981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:48.410180 containerd[1592]: time="2025-08-13T07:09:48.410058061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.597682046s" Aug 13 07:09:48.410180 containerd[1592]: time="2025-08-13T07:09:48.410091471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:09:48.411652 containerd[1592]: time="2025-08-13T07:09:48.411608773Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 07:09:48.413536 containerd[1592]: time="2025-08-13T07:09:48.413290410Z" level=info msg="CreateContainer within sandbox \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:09:48.436597 containerd[1592]: time="2025-08-13T07:09:48.436460073Z" level=info msg="CreateContainer within sandbox \"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"401da8d7025c78751be853679f86dbda3cf2f0b83ff8afe39109baf1c6af7c20\"" Aug 13 07:09:48.439064 containerd[1592]: time="2025-08-13T07:09:48.437454979Z" level=info msg="StartContainer for \"401da8d7025c78751be853679f86dbda3cf2f0b83ff8afe39109baf1c6af7c20\"" Aug 13 07:09:48.523432 containerd[1592]: time="2025-08-13T07:09:48.522078592Z" level=info msg="StartContainer for \"401da8d7025c78751be853679f86dbda3cf2f0b83ff8afe39109baf1c6af7c20\" returns successfully" Aug 13 07:09:48.728862 kubelet[1941]: E0813 07:09:48.728778 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:48.805631 kubelet[1941]: I0813 07:09:48.805583 1941 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:09:48.805631 kubelet[1941]: I0813 07:09:48.805635 1941 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:09:48.950947 kubelet[1941]: I0813 07:09:48.950861 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jl8pz" podStartSLOduration=29.91997782 podStartE2EDuration="32.950834276s" podCreationTimestamp="2025-08-13 07:09:16 +0000 UTC" firstStartedPulling="2025-08-13 07:09:45.380376779 +0000 UTC m=+29.477095750" lastFinishedPulling="2025-08-13 07:09:48.41123324 +0000 UTC m=+32.507952206" observedRunningTime="2025-08-13 07:09:48.949852644 +0000 UTC m=+33.046571635" watchObservedRunningTime="2025-08-13 07:09:48.950834276 +0000 UTC m=+33.047553267" Aug 13 07:09:49.338394 kubelet[1941]: W0813 07:09:49.338245 1941 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:164.92.99.201" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node '164.92.99.201' and this object Aug 13 07:09:49.338394 kubelet[1941]: E0813 07:09:49.338293 1941 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:164.92.99.201\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '164.92.99.201' and this object" logger="UnhandledError" Aug 13 07:09:49.338394 kubelet[1941]: W0813 07:09:49.338355 1941 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:164.92.99.201" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '164.92.99.201' and this object Aug 13 07:09:49.338394 kubelet[1941]: E0813 07:09:49.338368 1941 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:164.92.99.201\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '164.92.99.201' and this object" logger="UnhandledError" Aug 13 07:09:49.435538 kubelet[1941]: I0813 07:09:49.435482 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsp8\" (UniqueName: \"kubernetes.io/projected/8fd89102-888d-468b-ae5c-29476b57d2ae-kube-api-access-5bsp8\") pod \"calico-apiserver-7c4567fdc-ljqxv\" (UID: \"8fd89102-888d-468b-ae5c-29476b57d2ae\") " pod="calico-apiserver/calico-apiserver-7c4567fdc-ljqxv" Aug 13 07:09:49.435538 kubelet[1941]: I0813 07:09:49.435544 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fd89102-888d-468b-ae5c-29476b57d2ae-calico-apiserver-certs\") pod \"calico-apiserver-7c4567fdc-ljqxv\" (UID: \"8fd89102-888d-468b-ae5c-29476b57d2ae\") " 
pod="calico-apiserver/calico-apiserver-7c4567fdc-ljqxv" Aug 13 07:09:49.625597 systemd-networkd[1226]: cali7aeaf3ed9af: Gained IPv6LL Aug 13 07:09:49.731429 kubelet[1941]: E0813 07:09:49.729296 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:50.542771 kubelet[1941]: E0813 07:09:50.542703 1941 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:09:50.542771 kubelet[1941]: E0813 07:09:50.542750 1941 projected.go:194] Error preparing data for projected volume kube-api-access-5bsp8 for pod calico-apiserver/calico-apiserver-7c4567fdc-ljqxv: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:09:50.543147 kubelet[1941]: E0813 07:09:50.542825 1941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fd89102-888d-468b-ae5c-29476b57d2ae-kube-api-access-5bsp8 podName:8fd89102-888d-468b-ae5c-29476b57d2ae nodeName:}" failed. No retries permitted until 2025-08-13 07:09:51.042804856 +0000 UTC m=+35.139523837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5bsp8" (UniqueName: "kubernetes.io/projected/8fd89102-888d-468b-ae5c-29476b57d2ae-kube-api-access-5bsp8") pod "calico-apiserver-7c4567fdc-ljqxv" (UID: "8fd89102-888d-468b-ae5c-29476b57d2ae") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:09:50.730319 kubelet[1941]: E0813 07:09:50.730262 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:51.106906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986981380.mount: Deactivated successfully. 
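The pod_startup_latency_tracker line above for csi-node-driver-jl8pz is internally consistent: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that figure minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling. Re-deriving both from the timestamps printed in that line (layout and interpretation assumed, not taken from kubelet documentation):

package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps kubelet prints in the tracker line
// (Go's default time.Time string format).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 07:09:16 +0000 UTC")
	pullStart := mustParse("2025-08-13 07:09:45.380376779 +0000 UTC")
	pullEnd := mustParse("2025-08-13 07:09:48.41123324 +0000 UTC")
	running := mustParse("2025-08-13 07:09:48.950834276 +0000 UTC")

	e2e := running.Sub(created)    // 32.950834276s, the reported podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // time spent pulling images
	fmt.Println("end-to-end:", e2e)
	fmt.Println("excluding pulls:", e2e-pull) // ≈ 29.919977815s, matching podStartSLOduration
}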
Aug 13 07:09:51.135042 containerd[1592]: time="2025-08-13T07:09:51.134359124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-ljqxv,Uid:8fd89102-888d-468b-ae5c-29476b57d2ae,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:09:51.327982 systemd-networkd[1226]: cali866ea9c6ce3: Link UP Aug 13 07:09:51.329808 systemd-networkd[1226]: cali866ea9c6ce3: Gained carrier Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.199 [INFO][3082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0 calico-apiserver-7c4567fdc- calico-apiserver 8fd89102-888d-468b-ae5c-29476b57d2ae 1336 0 2025-08-13 07:09:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c4567fdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 164.92.99.201 calico-apiserver-7c4567fdc-ljqxv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali866ea9c6ce3 [] [] }} ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.200 [INFO][3082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.246 [INFO][3095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" HandleID="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Workload="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.246 [INFO][3095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" HandleID="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Workload="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"164.92.99.201", "pod":"calico-apiserver-7c4567fdc-ljqxv", "timestamp":"2025-08-13 07:09:51.246155365 +0000 UTC"}, Hostname:"164.92.99.201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.247 [INFO][3095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.247 [INFO][3095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
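Each assignment in these entries is keyed by a handle of the form k8s-pod-network.<container ID> and carries namespace, node, pod and timestamp attributes, as the assignArgs dump above shows. A plain-struct sketch of that request shape (illustrative only; these are not Calico's actual types or API):

package main

import (
	"fmt"
	"time"
)

// assignmentRequest mirrors the fields visible in the ipam_plugin.go
// "Auto assigning IP" entries; it is an illustration, not Calico's type.
type assignmentRequest struct {
	HandleID string
	Attrs    map[string]string
	Num4     int
	Num6     int
}

func newRequest(containerID, namespace, node, pod string) assignmentRequest {
	return assignmentRequest{
		// Every handle in these logs is the container ID behind a fixed prefix.
		HandleID: "k8s-pod-network." + containerID,
		Attrs: map[string]string{
			"namespace": namespace,
			"node":      node,
			"pod":       pod,
			"timestamp": time.Now().UTC().String(),
		},
		Num4: 1, // one IPv4 address requested, no IPv6, as in the log
		Num6: 0,
	}
}

func main() {
	r := newRequest("2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763",
		"calico-apiserver", "164.92.99.201", "calico-apiserver-7c4567fdc-ljqxv")
	fmt.Println(r.HandleID)
	fmt.Println(r.Attrs["pod"])
}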
Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.247 [INFO][3095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.99.201' Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.270 [INFO][3095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.279 [INFO][3095] ipam/ipam.go 394: Looking up existing affinities for host host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.286 [INFO][3095] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.290 [INFO][3095] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.295 [INFO][3095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.296 [INFO][3095] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.300 [INFO][3095] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763 Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.308 [INFO][3095] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.318 [INFO][3095] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.318 [INFO][3095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" host="164.92.99.201" Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.318 [INFO][3095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
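By this point 192.168.94.65 (csi-node-driver-jl8pz) and .66 (the nginx pod) are taken, and the walk above, performed under the host-wide IPAM lock, lands on .67. A much-simplified picture of the "Attempting to assign 1 addresses from block" step is a first-free scan of the /26; the sketch also marks .64 as used so a naive scan reproduces the observed answer, although the excerpt itself never says whether .64 is allocated or simply skipped by Calico's real selection logic (an assumption):

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a block in order and returns the first address that is
// not already claimed. A much-simplified stand-in for the block-assignment
// step in the entries above.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.94.64/26")
	used := map[netip.Addr]bool{
		// .64 does not appear in this excerpt; marked used here only so the
		// naive scan matches the log (assumption).
		netip.MustParseAddr("192.168.94.64"): true,
		netip.MustParseAddr("192.168.94.65"): true, // csi-node-driver-jl8pz
		netip.MustParseAddr("192.168.94.66"): true, // nginx-deployment-8587fbcb89-4krzr
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("next assignment:", a) // 192.168.94.67, as claimed above
	}
}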
Aug 13 07:09:51.361384 containerd[1592]: 2025-08-13 07:09:51.318 [INFO][3095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" HandleID="k8s-pod-network.2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Workload="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.321 [INFO][3082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fd89102-888d-468b-ae5c-29476b57d2ae", ResourceVersion:"1336", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"", Pod:"calico-apiserver-7c4567fdc-ljqxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866ea9c6ce3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.321 [INFO][3082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.67/32] ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.321 [INFO][3082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali866ea9c6ce3 ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.330 [INFO][3082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.332 [INFO][3082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fd89102-888d-468b-ae5c-29476b57d2ae", ResourceVersion:"1336", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763", Pod:"calico-apiserver-7c4567fdc-ljqxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866ea9c6ce3", MAC:"e2:b4:ca:f3:82:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:51.362451 containerd[1592]: 2025-08-13 07:09:51.349 [INFO][3082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-ljqxv" WorkloadEndpoint="164.92.99.201-k8s-calico--apiserver--7c4567fdc--ljqxv-eth0" Aug 13 07:09:51.411589 containerd[1592]: time="2025-08-13T07:09:51.410597167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:51.411589 containerd[1592]: time="2025-08-13T07:09:51.410687941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:51.411589 containerd[1592]: time="2025-08-13T07:09:51.410709278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:51.411589 containerd[1592]: time="2025-08-13T07:09:51.410872631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:51.516015 containerd[1592]: time="2025-08-13T07:09:51.515883802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-ljqxv,Uid:8fd89102-888d-468b-ae5c-29476b57d2ae,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763\"" Aug 13 07:09:51.731203 kubelet[1941]: E0813 07:09:51.731098 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:52.596453 containerd[1592]: time="2025-08-13T07:09:52.596164970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:52.597757 containerd[1592]: time="2025-08-13T07:09:52.597545129Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204" Aug 13 07:09:52.598351 containerd[1592]: time="2025-08-13T07:09:52.598319538Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:52.601358 containerd[1592]: time="2025-08-13T07:09:52.601321355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:52.602622 containerd[1592]: time="2025-08-13T07:09:52.602486667Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 4.190848527s" Aug 13 07:09:52.602622 containerd[1592]: time="2025-08-13T07:09:52.602522073Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 07:09:52.604951 containerd[1592]: time="2025-08-13T07:09:52.604892748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:09:52.605901 containerd[1592]: time="2025-08-13T07:09:52.605856552Z" level=info msg="CreateContainer within sandbox \"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 13 07:09:52.625736 containerd[1592]: time="2025-08-13T07:09:52.625507050Z" level=info msg="CreateContainer within sandbox \"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a7d60feedfc2f3dc7a0cb69505845bd4f5129d8c64d8289c939558fccba18ba9\"" Aug 13 07:09:52.627436 containerd[1592]: time="2025-08-13T07:09:52.626355775Z" level=info msg="StartContainer for \"a7d60feedfc2f3dc7a0cb69505845bd4f5129d8c64d8289c939558fccba18ba9\"" Aug 13 07:09:52.632874 systemd-networkd[1226]: cali866ea9c6ce3: Gained IPv6LL Aug 13 07:09:52.700751 containerd[1592]: time="2025-08-13T07:09:52.700700327Z" level=info msg="StartContainer for \"a7d60feedfc2f3dc7a0cb69505845bd4f5129d8c64d8289c939558fccba18ba9\" returns successfully" Aug 13 07:09:52.732098 kubelet[1941]: E0813 07:09:52.732053 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 
07:09:52.965289 kubelet[1941]: I0813 07:09:52.964926 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-4krzr" podStartSLOduration=17.643935508 podStartE2EDuration="21.964895273s" podCreationTimestamp="2025-08-13 07:09:31 +0000 UTC" firstStartedPulling="2025-08-13 07:09:48.282683818 +0000 UTC m=+32.379402783" lastFinishedPulling="2025-08-13 07:09:52.60364357 +0000 UTC m=+36.700362548" observedRunningTime="2025-08-13 07:09:52.96485582 +0000 UTC m=+37.061574809" watchObservedRunningTime="2025-08-13 07:09:52.964895273 +0000 UTC m=+37.061614263" Aug 13 07:09:53.733700 kubelet[1941]: E0813 07:09:53.733590 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:54.734196 kubelet[1941]: E0813 07:09:54.734139 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:55.014304 containerd[1592]: time="2025-08-13T07:09:55.014112365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:55.015789 containerd[1592]: time="2025-08-13T07:09:55.015723330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:09:55.017202 containerd[1592]: time="2025-08-13T07:09:55.016665678Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:55.020599 containerd[1592]: time="2025-08-13T07:09:55.020541644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:55.021472 containerd[1592]: time="2025-08-13T07:09:55.021423481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.416421201s" Aug 13 07:09:55.021472 containerd[1592]: time="2025-08-13T07:09:55.021475869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:09:55.024618 containerd[1592]: time="2025-08-13T07:09:55.024494010Z" level=info msg="CreateContainer within sandbox \"2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:09:55.048389 containerd[1592]: time="2025-08-13T07:09:55.048316993Z" level=info msg="CreateContainer within sandbox \"2cc9da7d528faacc6d1945afe3ff72f9721a6428bb2ca64f756e0cb9e1fe9763\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4f44fe4b0761236f65d52eb8d36c3f9874b2ffcacbb5a6b71015a2f85d92e10\"" Aug 13 07:09:55.049441 containerd[1592]: time="2025-08-13T07:09:55.049267252Z" level=info msg="StartContainer for \"f4f44fe4b0761236f65d52eb8d36c3f9874b2ffcacbb5a6b71015a2f85d92e10\"" Aug 13 07:09:55.161381 containerd[1592]: time="2025-08-13T07:09:55.161080737Z" level=info msg="StartContainer for 
\"f4f44fe4b0761236f65d52eb8d36c3f9874b2ffcacbb5a6b71015a2f85d92e10\" returns successfully" Aug 13 07:09:55.593705 update_engine[1565]: I20250813 07:09:55.593495 1565 update_attempter.cc:509] Updating boot flags... Aug 13 07:09:55.643470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3285) Aug 13 07:09:55.718352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3284) Aug 13 07:09:55.734829 kubelet[1941]: E0813 07:09:55.734791 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:55.807202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3284) Aug 13 07:09:56.699948 kubelet[1941]: E0813 07:09:56.699881 1941 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:56.736034 kubelet[1941]: E0813 07:09:56.735956 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:57.043628 kubelet[1941]: I0813 07:09:57.043262 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c4567fdc-ljqxv" podStartSLOduration=4.538902107 podStartE2EDuration="8.043222031s" podCreationTimestamp="2025-08-13 07:09:49 +0000 UTC" firstStartedPulling="2025-08-13 07:09:51.518477892 +0000 UTC m=+35.615196857" lastFinishedPulling="2025-08-13 07:09:55.022797815 +0000 UTC m=+39.119516781" observedRunningTime="2025-08-13 07:09:55.988655177 +0000 UTC m=+40.085374165" watchObservedRunningTime="2025-08-13 07:09:57.043222031 +0000 UTC m=+41.139941013" Aug 13 07:09:57.737241 kubelet[1941]: E0813 07:09:57.737174 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:58.738433 kubelet[1941]: E0813 07:09:58.738286 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:59.432861 kubelet[1941]: I0813 07:09:59.432809 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsth\" (UniqueName: \"kubernetes.io/projected/0210892d-982f-45aa-ae7b-63ef5bd476a1-kube-api-access-pnsth\") pod \"nfs-server-provisioner-0\" (UID: \"0210892d-982f-45aa-ae7b-63ef5bd476a1\") " pod="default/nfs-server-provisioner-0" Aug 13 07:09:59.433051 kubelet[1941]: I0813 07:09:59.432898 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0210892d-982f-45aa-ae7b-63ef5bd476a1-data\") pod \"nfs-server-provisioner-0\" (UID: \"0210892d-982f-45aa-ae7b-63ef5bd476a1\") " pod="default/nfs-server-provisioner-0" Aug 13 07:09:59.671237 containerd[1592]: time="2025-08-13T07:09:59.671182655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0210892d-982f-45aa-ae7b-63ef5bd476a1,Namespace:default,Attempt:0,}" Aug 13 07:09:59.739488 kubelet[1941]: E0813 07:09:59.739439 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:09:59.846266 systemd-networkd[1226]: cali60e51b789ff: Link UP Aug 13 07:09:59.848003 systemd-networkd[1226]: cali60e51b789ff: Gained carrier Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.735 [INFO][3314] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.99.201-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 0210892d-982f-45aa-ae7b-63ef5bd476a1 1443 0 2025-08-13 07:09:59 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 164.92.99.201 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.736 [INFO][3314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.771 [INFO][3325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" HandleID="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Workload="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.771 [INFO][3325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" HandleID="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Workload="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"default", "node":"164.92.99.201", "pod":"nfs-server-provisioner-0", "timestamp":"2025-08-13 07:09:59.77172649 +0000 UTC"}, Hostname:"164.92.99.201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.772 [INFO][3325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.772 [INFO][3325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.772 [INFO][3325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.99.201' Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.786 [INFO][3325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.794 [INFO][3325] ipam/ipam.go 394: Looking up existing affinities for host host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.809 [INFO][3325] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.812 [INFO][3325] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.816 [INFO][3325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.816 [INFO][3325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.819 [INFO][3325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058 Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.826 [INFO][3325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.837 [INFO][3325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.837 [INFO][3325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.68/26] handle="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" host="164.92.99.201" Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.837 [INFO][3325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:09:59.861793 containerd[1592]: 2025-08-13 07:09:59.837 [INFO][3325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" HandleID="k8s-pod-network.9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Workload="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.862566 containerd[1592]: 2025-08-13 07:09:59.839 [INFO][3314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0210892d-982f-45aa-ae7b-63ef5bd476a1", ResourceVersion:"1443", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:59.862566 containerd[1592]: 2025-08-13 07:09:59.839 [INFO][3314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.68/32] ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.862566 containerd[1592]: 2025-08-13 07:09:59.839 [INFO][3314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.862566 containerd[1592]: 2025-08-13 07:09:59.848 [INFO][3314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.862919 containerd[1592]: 2025-08-13 07:09:59.849 [INFO][3314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0210892d-982f-45aa-ae7b-63ef5bd476a1", ResourceVersion:"1443", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e6:6d:95:93:4d:7d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:59.862919 containerd[1592]: 2025-08-13 07:09:59.859 [INFO][3314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.99.201-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:09:59.892457 containerd[1592]: time="2025-08-13T07:09:59.891929312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:59.892615 containerd[1592]: time="2025-08-13T07:09:59.892429725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:59.892615 containerd[1592]: time="2025-08-13T07:09:59.892452277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.892615 containerd[1592]: time="2025-08-13T07:09:59.892564260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.980420 containerd[1592]: time="2025-08-13T07:09:59.980365360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0210892d-982f-45aa-ae7b-63ef5bd476a1,Namespace:default,Attempt:0,} returns sandbox id \"9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058\"" Aug 13 07:09:59.982821 containerd[1592]: time="2025-08-13T07:09:59.982726586Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Aug 13 07:10:00.740663 kubelet[1941]: E0813 07:10:00.740597 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:01.740945 kubelet[1941]: E0813 07:10:01.740890 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:01.784326 systemd-networkd[1226]: cali60e51b789ff: Gained IPv6LL Aug 13 07:10:02.741797 kubelet[1941]: E0813 07:10:02.741709 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:03.191320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777203026.mount: Deactivated successfully. Aug 13 07:10:03.742562 kubelet[1941]: E0813 07:10:03.742464 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:04.743064 kubelet[1941]: E0813 07:10:04.742989 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:05.744213 kubelet[1941]: E0813 07:10:05.744120 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:06.121567 containerd[1592]: time="2025-08-13T07:10:06.121084856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:06.123187 containerd[1592]: time="2025-08-13T07:10:06.123114239Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Aug 13 07:10:06.124492 containerd[1592]: time="2025-08-13T07:10:06.124434574Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:06.136381 containerd[1592]: time="2025-08-13T07:10:06.135601212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:06.139371 containerd[1592]: time="2025-08-13T07:10:06.139277772Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.156451665s" Aug 13 07:10:06.139371 containerd[1592]: time="2025-08-13T07:10:06.139349636Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Aug 13 07:10:06.148820 containerd[1592]: time="2025-08-13T07:10:06.148428030Z" level=info msg="CreateContainer within sandbox \"9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Aug 13 07:10:06.169966 containerd[1592]: time="2025-08-13T07:10:06.169885989Z" level=info msg="CreateContainer within sandbox \"9156b96752f885766a5dd9c9a67eed195c8c08622c23adfdfb52dcde51eed058\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3cbea9699a5ab5e91e0099a1dfdf13366d20d092ef5f4979d503e760405a93e1\"" Aug 13 07:10:06.173456 containerd[1592]: time="2025-08-13T07:10:06.171837870Z" level=info msg="StartContainer for \"3cbea9699a5ab5e91e0099a1dfdf13366d20d092ef5f4979d503e760405a93e1\"" Aug 13 07:10:06.271443 containerd[1592]: time="2025-08-13T07:10:06.270090016Z" level=info msg="StartContainer for \"3cbea9699a5ab5e91e0099a1dfdf13366d20d092ef5f4979d503e760405a93e1\" returns successfully" Aug 13 07:10:06.745001 kubelet[1941]: E0813 07:10:06.744923 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:07.024569 kubelet[1941]: I0813 07:10:07.023780 1941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.865643342 podStartE2EDuration="8.023746286s" podCreationTimestamp="2025-08-13 07:09:59 +0000 UTC" firstStartedPulling="2025-08-13 07:09:59.98227569 +0000 UTC m=+44.078994669" lastFinishedPulling="2025-08-13 07:10:06.14037864 +0000 UTC m=+50.237097613" observedRunningTime="2025-08-13 07:10:07.023503422 +0000 UTC m=+51.120222415" watchObservedRunningTime="2025-08-13 07:10:07.023746286 +0000 UTC m=+51.120465280" Aug 13 07:10:07.745622 kubelet[1941]: E0813 07:10:07.745541 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:08.746781 kubelet[1941]: E0813 07:10:08.746705 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:09.747682 kubelet[1941]: E0813 07:10:09.747606 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:10.313942 systemd[1]: Started sshd@7-164.92.99.201:22-14.232.230.165:46608.service - OpenSSH per-connection server daemon (14.232.230.165:46608). Aug 13 07:10:10.370483 sshd[3488]: Connection closed by 14.232.230.165 port 46608 Aug 13 07:10:10.371507 systemd[1]: sshd@7-164.92.99.201:22-14.232.230.165:46608.service: Deactivated successfully. Aug 13 07:10:10.748859 kubelet[1941]: E0813 07:10:10.748774 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:10.871006 systemd[1]: Started sshd@8-164.92.99.201:22-14.232.230.165:47170.service - OpenSSH per-connection server daemon (14.232.230.165:47170). 
Aug 13 07:10:11.749307 kubelet[1941]: E0813 07:10:11.749234 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:11.829865 sshd[3492]: Invalid user a from 14.232.230.165 port 47170 Aug 13 07:10:12.044988 sshd[3492]: Connection closed by invalid user a 14.232.230.165 port 47170 [preauth] Aug 13 07:10:12.048815 systemd[1]: sshd@8-164.92.99.201:22-14.232.230.165:47170.service: Deactivated successfully. Aug 13 07:10:12.749931 kubelet[1941]: E0813 07:10:12.749860 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:13.750888 kubelet[1941]: E0813 07:10:13.750815 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:14.751081 kubelet[1941]: E0813 07:10:14.751009 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:14.904692 systemd[1]: run-containerd-runc-k8s.io-c723247a8e2385afdfdda3ad7f5193a0c6738355b2f194ec2752912bedce26bf-runc.AeHeTr.mount: Deactivated successfully. Aug 13 07:10:15.752258 kubelet[1941]: E0813 07:10:15.752190 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:15.977643 kubelet[1941]: I0813 07:10:15.977574 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9dbe5f3b-f220-443d-8204-2621830eaa11\" (UniqueName: \"kubernetes.io/nfs/0664bc7c-4d2b-462c-bcb3-be374f475360-pvc-9dbe5f3b-f220-443d-8204-2621830eaa11\") pod \"test-pod-1\" (UID: \"0664bc7c-4d2b-462c-bcb3-be374f475360\") " pod="default/test-pod-1" Aug 13 07:10:15.977832 kubelet[1941]: I0813 07:10:15.977659 1941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6995s\" (UniqueName: \"kubernetes.io/projected/0664bc7c-4d2b-462c-bcb3-be374f475360-kube-api-access-6995s\") pod \"test-pod-1\" (UID: \"0664bc7c-4d2b-462c-bcb3-be374f475360\") " pod="default/test-pod-1" Aug 13 07:10:16.140925 kernel: FS-Cache: Loaded Aug 13 07:10:16.251993 kernel: RPC: Registered named UNIX socket transport module. Aug 13 07:10:16.252153 kernel: RPC: Registered udp transport module. Aug 13 07:10:16.252664 kernel: RPC: Registered tcp transport module. Aug 13 07:10:16.253926 kernel: RPC: Registered tcp-with-tls transport module. Aug 13 07:10:16.254973 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Aug 13 07:10:16.592548 kernel: NFS: Registering the id_resolver key type Aug 13 07:10:16.592685 kernel: Key type id_resolver registered Aug 13 07:10:16.594603 kernel: Key type id_legacy registered Aug 13 07:10:16.639016 nfsidmap[3536]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-e-dc2da44dd2' Aug 13 07:10:16.645977 nfsidmap[3537]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-e-dc2da44dd2' Aug 13 07:10:16.700357 kubelet[1941]: E0813 07:10:16.700281 1941 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:16.726269 containerd[1592]: time="2025-08-13T07:10:16.726038957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0664bc7c-4d2b-462c-bcb3-be374f475360,Namespace:default,Attempt:0,}" Aug 13 07:10:16.768474 kubelet[1941]: E0813 07:10:16.767782 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:16.773510 containerd[1592]: time="2025-08-13T07:10:16.772914815Z" level=info msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" Aug 13 07:10:17.086019 systemd-networkd[1226]: cali5ec59c6bf6e: Link UP Aug 13 07:10:17.088237 systemd-networkd[1226]: cali5ec59c6bf6e: Gained carrier Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.870 [INFO][3540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.99.201-k8s-test--pod--1-eth0 default 0664bc7c-4d2b-462c-bcb3-be374f475360 1512 0 2025-08-13 07:10:00 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 164.92.99.201 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.871 [INFO][3540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.942 [INFO][3566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" HandleID="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Workload="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.942 [INFO][3566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" HandleID="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Workload="164.92.99.201-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"default", "node":"164.92.99.201", "pod":"test-pod-1", "timestamp":"2025-08-13 07:10:16.942121021 +0000 UTC"}, Hostname:"164.92.99.201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.942 [INFO][3566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.942 [INFO][3566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.942 [INFO][3566] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.99.201' Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.966 [INFO][3566] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:16.989 [INFO][3566] ipam/ipam.go 394: Looking up existing affinities for host host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.003 [INFO][3566] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.018 [INFO][3566] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.037 [INFO][3566] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.037 [INFO][3566] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.041 [INFO][3566] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882 Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.057 [INFO][3566] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.079 [INFO][3566] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.079 [INFO][3566] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" host="164.92.99.201" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.079 [INFO][3566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.079 [INFO][3566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" HandleID="k8s-pod-network.8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Workload="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.105746 containerd[1592]: 2025-08-13 07:10:17.081 [INFO][3540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0664bc7c-4d2b-462c-bcb3-be374f475360", ResourceVersion:"1512", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.107808 containerd[1592]: 2025-08-13 07:10:17.082 [INFO][3540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.69/32] ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.107808 containerd[1592]: 2025-08-13 07:10:17.082 [INFO][3540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.107808 containerd[1592]: 2025-08-13 07:10:17.089 [INFO][3540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.107808 containerd[1592]: 2025-08-13 07:10:17.090 [INFO][3540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0664bc7c-4d2b-462c-bcb3-be374f475360", ResourceVersion:"1512", Generation:0, CreationTimestamp:time.Date(2025, 
time.August, 13, 7, 10, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:83:d4:1e:f0:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.107808 containerd[1592]: 2025-08-13 07:10:17.103 [INFO][3540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.99.201-k8s-test--pod--1-eth0" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.885 [WARNING][3555] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-csi--node--driver--jl8pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"863ff698-bcf1-43dc-8890-89f9cd527211", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e", Pod:"csi-node-driver-jl8pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f9d462897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.887 [INFO][3555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.888 [INFO][3555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" iface="eth0" netns="" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.888 [INFO][3555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.888 [INFO][3555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.958 [INFO][3568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:16.959 [INFO][3568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:17.080 [INFO][3568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:17.095 [WARNING][3568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:17.097 [INFO][3568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:17.104 [INFO][3568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:17.114604 containerd[1592]: 2025-08-13 07:10:17.110 [INFO][3555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.116303 containerd[1592]: time="2025-08-13T07:10:17.115528888Z" level=info msg="TearDown network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" successfully" Aug 13 07:10:17.116303 containerd[1592]: time="2025-08-13T07:10:17.115569145Z" level=info msg="StopPodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" returns successfully" Aug 13 07:10:17.167511 containerd[1592]: time="2025-08-13T07:10:17.167318973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:17.168893 containerd[1592]: time="2025-08-13T07:10:17.168473495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:17.168893 containerd[1592]: time="2025-08-13T07:10:17.168520363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.168893 containerd[1592]: time="2025-08-13T07:10:17.168690845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.171793 containerd[1592]: time="2025-08-13T07:10:17.169601420Z" level=info msg="RemovePodSandbox for \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" Aug 13 07:10:17.171793 containerd[1592]: time="2025-08-13T07:10:17.169679368Z" level=info msg="Forcibly stopping sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\"" Aug 13 07:10:17.304623 containerd[1592]: time="2025-08-13T07:10:17.304572259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0664bc7c-4d2b-462c-bcb3-be374f475360,Namespace:default,Attempt:0,} returns sandbox id \"8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882\"" Aug 13 07:10:17.319839 containerd[1592]: time="2025-08-13T07:10:17.319773073Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.258 [WARNING][3620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-csi--node--driver--jl8pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"863ff698-bcf1-43dc-8890-89f9cd527211", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"2e26b047b096f3a1025eefe51f5c0a396c51a428ca1bab401eb4de971a2efb6e", Pod:"csi-node-driver-jl8pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f9d462897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.258 [INFO][3620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.258 [INFO][3620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" iface="eth0" netns="" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.259 [INFO][3620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.259 [INFO][3620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.345 [INFO][3638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.345 [INFO][3638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.345 [INFO][3638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.353 [WARNING][3638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.353 [INFO][3638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" HandleID="k8s-pod-network.52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Workload="164.92.99.201-k8s-csi--node--driver--jl8pz-eth0" Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.356 [INFO][3638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:17.360932 containerd[1592]: 2025-08-13 07:10:17.358 [INFO][3620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d" Aug 13 07:10:17.361633 containerd[1592]: time="2025-08-13T07:10:17.360910441Z" level=info msg="TearDown network for sandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" successfully" Aug 13 07:10:17.408943 containerd[1592]: time="2025-08-13T07:10:17.407554023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:10:17.408943 containerd[1592]: time="2025-08-13T07:10:17.407662590Z" level=info msg="RemovePodSandbox \"52f62052c201c5a3558279b44d2c3c447161b7194cd474a20a6a8b199ec8c72d\" returns successfully" Aug 13 07:10:17.408943 containerd[1592]: time="2025-08-13T07:10:17.408496877Z" level=info msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.470 [WARNING][3659] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"918bd42e-7b1b-4175-829b-4def6d17c3dc", ResourceVersion:"1366", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f", Pod:"nginx-deployment-8587fbcb89-4krzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7aeaf3ed9af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.472 [INFO][3659] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.472 [INFO][3659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" iface="eth0" netns="" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.472 [INFO][3659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.473 [INFO][3659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.512 [INFO][3671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.512 [INFO][3671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.512 [INFO][3671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.521 [WARNING][3671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.521 [INFO][3671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.526 [INFO][3671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:17.531008 containerd[1592]: 2025-08-13 07:10:17.528 [INFO][3659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.532011 containerd[1592]: time="2025-08-13T07:10:17.531962755Z" level=info msg="TearDown network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" successfully" Aug 13 07:10:17.532140 containerd[1592]: time="2025-08-13T07:10:17.532116021Z" level=info msg="StopPodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" returns successfully" Aug 13 07:10:17.532956 containerd[1592]: time="2025-08-13T07:10:17.532914352Z" level=info msg="RemovePodSandbox for \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" Aug 13 07:10:17.533084 containerd[1592]: time="2025-08-13T07:10:17.532966435Z" level=info msg="Forcibly stopping sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\"" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.586 [WARNING][3686] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"918bd42e-7b1b-4175-829b-4def6d17c3dc", ResourceVersion:"1366", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.99.201", ContainerID:"10890e12f5519cc881b43b83e1420f39eb87456872b3be7d807631eb793d118f", Pod:"nginx-deployment-8587fbcb89-4krzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7aeaf3ed9af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.587 [INFO][3686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.587 [INFO][3686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" iface="eth0" netns="" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.587 [INFO][3686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.587 [INFO][3686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.624 [INFO][3693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.624 [INFO][3693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.624 [INFO][3693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.643 [WARNING][3693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.644 [INFO][3693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" HandleID="k8s-pod-network.256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Workload="164.92.99.201-k8s-nginx--deployment--8587fbcb89--4krzr-eth0" Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.652 [INFO][3693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:17.656645 containerd[1592]: 2025-08-13 07:10:17.654 [INFO][3686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d" Aug 13 07:10:17.658171 containerd[1592]: time="2025-08-13T07:10:17.657264740Z" level=info msg="TearDown network for sandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" successfully" Aug 13 07:10:17.695803 containerd[1592]: time="2025-08-13T07:10:17.695559967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:10:17.695803 containerd[1592]: time="2025-08-13T07:10:17.695681963Z" level=info msg="RemovePodSandbox \"256d604581ee4d250935e9d98594cc371a8b8a70407ea6ab678d98f9f8dc958d\" returns successfully" Aug 13 07:10:17.715303 containerd[1592]: time="2025-08-13T07:10:17.714549135Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:17.715303 containerd[1592]: time="2025-08-13T07:10:17.715185694Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Aug 13 07:10:17.719702 containerd[1592]: time="2025-08-13T07:10:17.719644547Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 399.517108ms" Aug 13 07:10:17.719933 containerd[1592]: time="2025-08-13T07:10:17.719908538Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 07:10:17.723303 containerd[1592]: time="2025-08-13T07:10:17.723264795Z" level=info msg="CreateContainer within sandbox \"8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882\" for container &ContainerMetadata{Name:test,Attempt:0,}" Aug 13 07:10:17.738984 containerd[1592]: time="2025-08-13T07:10:17.738914839Z" level=info msg="CreateContainer within sandbox \"8fdf980dab950b88888cf92c54f6bcb714fac9bf103c8d4d49154cce4a2c5882\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"de4c17f56338849416cd0df3fd32e341dd20408d127f6f77bf952a76cc290473\"" Aug 13 07:10:17.741130 containerd[1592]: time="2025-08-13T07:10:17.739969950Z" level=info msg="StartContainer for \"de4c17f56338849416cd0df3fd32e341dd20408d127f6f77bf952a76cc290473\"" Aug 13 
07:10:17.768983 kubelet[1941]: E0813 07:10:17.768910 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:17.808591 containerd[1592]: time="2025-08-13T07:10:17.808526024Z" level=info msg="StartContainer for \"de4c17f56338849416cd0df3fd32e341dd20408d127f6f77bf952a76cc290473\" returns successfully" Aug 13 07:10:18.296121 systemd-networkd[1226]: cali5ec59c6bf6e: Gained IPv6LL Aug 13 07:10:18.769482 kubelet[1941]: E0813 07:10:18.769373 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:19.770269 kubelet[1941]: E0813 07:10:19.770206 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:20.770722 kubelet[1941]: E0813 07:10:20.770657 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:21.771838 kubelet[1941]: E0813 07:10:21.771759 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:22.772768 kubelet[1941]: E0813 07:10:22.772697 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:23.773843 kubelet[1941]: E0813 07:10:23.773766 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:10:24.774744 kubelet[1941]: E0813 07:10:24.774690 1941 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
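The Calico IPAM messages in the teardown sequence above ("About to acquire host-wide IPAM lock", "Asked to release address but it doesn't exist. Ignoring") describe releasing a pod's address allocation by its handle ID under a node-wide lock, and treating an already-released handle as a no-op so that the forced second StopPodSandbox still completes. The Go sketch below is only an in-memory illustration of that release-by-handle pattern under a single lock; the names (handleStore, ReleaseByHandle) and the shortened handle ID are invented for the example and are not Calico's libcalico-go API.

package main

import (
	"fmt"
	"sync"
)

// handleStore is a hypothetical stand-in for an IPAM backend that maps
// handle IDs (e.g. "k8s-pod-network.<containerID>") to allocated addresses.
type handleStore struct {
	mu     sync.Mutex // plays the role of the "host-wide IPAM lock" in the log
	byHand map[string][]string
}

func newHandleStore() *handleStore {
	return &handleStore{byHand: make(map[string][]string)}
}

// ReleaseByHandle mirrors the behaviour visible in the log: take the lock,
// release whatever is recorded under the handle, and if nothing is found,
// warn and carry on instead of failing the teardown.
func (s *handleStore) ReleaseByHandle(handleID string) {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	addrs, ok := s.byHand[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release address but it doesn't exist, ignoring handle %q\n", handleID)
		return
	}
	delete(s.byHand, handleID)
	fmt.Printf("released %v for handle %q\n", addrs, handleID)
}

func main() {
	s := newHandleStore()
	// Shortened, illustrative handle ID and the pod address seen in the log.
	s.byHand["k8s-pod-network.256d6045"] = []string{"192.168.94.66/32"}

	// The first teardown releases the address; a forced second teardown of
	// the same sandbox finds nothing and is ignored, as in the log above.
	s.ReleaseByHandle("k8s-pod-network.256d6045")
	s.ReleaseByHandle("k8s-pod-network.256d6045")
}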
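The repeated kubelet errors at the end of the excerpt come from the static-pod file source polling its configured manifest directory, /etc/kubernetes/manifests, which does not exist on this node; the condition is logged once per poll and otherwise ignored. A minimal sketch of that kind of existence check follows, assuming a plain os.Stat poll; it is illustrative only and is not kubelet's file_linux.go implementation, and the interval and iteration count are arbitrary choices for the example.

package main

import (
	"fmt"
	"os"
	"time"
)

// checkManifestPath is a hypothetical, simplified stand-in for the check
// behind the repeated "Unable to read config path" messages above: the
// static-pod source looks for its manifest directory and, if it is missing,
// logs the fact and ignores it rather than failing.
func checkManifestPath(path string) {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("unable to read config path %q: path does not exist, ignoring\n", path)
	} else if err != nil {
		fmt.Printf("unable to read config path %q: %v\n", path, err)
	} else {
		fmt.Printf("config path %q exists; static pod manifests would be read here\n", path)
	}
}

func main() {
	// /etc/kubernetes/manifests is the path seen in the log above; polling
	// it in a loop reproduces the once-per-interval error entries.
	for i := 0; i < 3; i++ {
		checkManifestPath("/etc/kubernetes/manifests")
		time.Sleep(1 * time.Second)
	}
}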