Aug 12 23:57:16.962564 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025 Aug 12 23:57:16.962592 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 12 23:57:16.962605 kernel: BIOS-provided physical RAM map: Aug 12 23:57:16.962613 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 12 23:57:16.962619 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 12 23:57:16.962626 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 12 23:57:16.962634 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 12 23:57:16.962641 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 12 23:57:16.962648 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 12 23:57:16.962655 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 12 23:57:16.962665 kernel: NX (Execute Disable) protection: active Aug 12 23:57:16.962672 kernel: APIC: Static calls initialized Aug 12 23:57:16.962684 kernel: SMBIOS 2.8 present. Aug 12 23:57:16.962709 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 12 23:57:16.962717 kernel: Hypervisor detected: KVM Aug 12 23:57:16.962725 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 12 23:57:16.962740 kernel: kvm-clock: using sched offset of 2948054011 cycles Aug 12 23:57:16.962748 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 12 23:57:16.962756 kernel: tsc: Detected 2494.138 MHz processor Aug 12 23:57:16.962765 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 12 23:57:16.962773 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 12 23:57:16.962781 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 12 23:57:16.962789 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 12 23:57:16.962797 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 12 23:57:16.962808 kernel: ACPI: Early table checksum verification disabled Aug 12 23:57:16.962815 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 12 23:57:16.962823 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962832 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962846 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962858 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 12 23:57:16.962869 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962879 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962891 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962907 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:57:16.962915 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 12 23:57:16.962923 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 12 23:57:16.962931 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 12 23:57:16.962939 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 12 23:57:16.962947 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 12 23:57:16.962955 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 12 23:57:16.962967 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 12 23:57:16.962978 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 12 23:57:16.962986 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 12 23:57:16.962994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 12 23:57:16.963002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 12 23:57:16.963015 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 12 23:57:16.963024 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 12 23:57:16.963035 kernel: Zone ranges: Aug 12 23:57:16.963043 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 12 23:57:16.963051 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 12 23:57:16.963060 kernel: Normal empty Aug 12 23:57:16.963068 kernel: Movable zone start for each node Aug 12 23:57:16.963076 kernel: Early memory node ranges Aug 12 23:57:16.963085 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 12 23:57:16.963093 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 12 23:57:16.963101 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 12 23:57:16.963110 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 12 23:57:16.963121 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 12 23:57:16.963131 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 12 23:57:16.963140 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 12 23:57:16.963148 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 12 23:57:16.963156 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 12 23:57:16.963164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 12 23:57:16.963178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 12 23:57:16.963191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 12 23:57:16.963202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 12 23:57:16.963218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 12 23:57:16.963230 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 12 23:57:16.963242 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 12 23:57:16.963254 kernel: TSC deadline timer available Aug 12 23:57:16.963265 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 12 23:57:16.963277 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 12 23:57:16.963289 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 12 23:57:16.963305 kernel: Booting paravirtualized kernel on KVM Aug 12 23:57:16.963315 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 12 23:57:16.963334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 12 23:57:16.963346 kernel: percpu: Embedded 58 pages/cpu 
s197096 r8192 d32280 u1048576 Aug 12 23:57:16.963358 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 12 23:57:16.963370 kernel: pcpu-alloc: [0] 0 1 Aug 12 23:57:16.963378 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 12 23:57:16.963388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 12 23:57:16.963397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 12 23:57:16.963405 kernel: random: crng init done Aug 12 23:57:16.963416 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 12 23:57:16.963425 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 12 23:57:16.963434 kernel: Fallback order for Node 0: 0 Aug 12 23:57:16.963442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 12 23:57:16.963450 kernel: Policy zone: DMA32 Aug 12 23:57:16.963459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 12 23:57:16.963467 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 127196K reserved, 0K cma-reserved) Aug 12 23:57:16.963476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 12 23:57:16.963484 kernel: Kernel/User page tables isolation: enabled Aug 12 23:57:16.963495 kernel: ftrace: allocating 37942 entries in 149 pages Aug 12 23:57:16.963504 kernel: ftrace: allocated 149 pages with 4 groups Aug 12 23:57:16.963512 kernel: Dynamic Preempt: voluntary Aug 12 23:57:16.963520 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 12 23:57:16.963529 kernel: rcu: RCU event tracing is enabled. Aug 12 23:57:16.963538 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 12 23:57:16.963546 kernel: Trampoline variant of Tasks RCU enabled. Aug 12 23:57:16.963555 kernel: Rude variant of Tasks RCU enabled. Aug 12 23:57:16.963563 kernel: Tracing variant of Tasks RCU enabled. Aug 12 23:57:16.963574 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 12 23:57:16.963583 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 12 23:57:16.963591 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 12 23:57:16.963599 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 12 23:57:16.963611 kernel: Console: colour VGA+ 80x25 Aug 12 23:57:16.963619 kernel: printk: console [tty0] enabled Aug 12 23:57:16.963627 kernel: printk: console [ttyS0] enabled Aug 12 23:57:16.963636 kernel: ACPI: Core revision 20230628 Aug 12 23:57:16.963644 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 12 23:57:16.963655 kernel: APIC: Switch to symmetric I/O mode setup Aug 12 23:57:16.963664 kernel: x2apic enabled Aug 12 23:57:16.963672 kernel: APIC: Switched APIC routing to: physical x2apic Aug 12 23:57:16.963681 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 12 23:57:16.963699 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 12 23:57:16.963708 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Aug 12 23:57:16.963716 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 12 23:57:16.963725 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 12 23:57:16.963745 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 12 23:57:16.963754 kernel: Spectre V2 : Mitigation: Retpolines Aug 12 23:57:16.963763 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 12 23:57:16.963775 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 12 23:57:16.963793 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 12 23:57:16.963805 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 12 23:57:16.963817 kernel: MDS: Mitigation: Clear CPU buffers Aug 12 23:57:16.963829 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 12 23:57:16.963841 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 12 23:57:16.963862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 12 23:57:16.963873 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 12 23:57:16.963882 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 12 23:57:16.963890 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 12 23:57:16.963900 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 12 23:57:16.963909 kernel: Freeing SMP alternatives memory: 32K Aug 12 23:57:16.963917 kernel: pid_max: default: 32768 minimum: 301 Aug 12 23:57:16.963926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 12 23:57:16.963938 kernel: landlock: Up and running. Aug 12 23:57:16.963947 kernel: SELinux: Initializing. Aug 12 23:57:16.963956 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 12 23:57:16.963965 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 12 23:57:16.963974 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 12 23:57:16.963983 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 12 23:57:16.963992 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 12 23:57:16.964001 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 12 23:57:16.964010 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Aug 12 23:57:16.964021 kernel: signal: max sigframe size: 1776 Aug 12 23:57:16.964030 kernel: rcu: Hierarchical SRCU implementation. Aug 12 23:57:16.964039 kernel: rcu: Max phase no-delay instances is 400. Aug 12 23:57:16.964048 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 12 23:57:16.964056 kernel: smp: Bringing up secondary CPUs ... Aug 12 23:57:16.964065 kernel: smpboot: x86: Booting SMP configuration: Aug 12 23:57:16.964074 kernel: .... node #0, CPUs: #1 Aug 12 23:57:16.964083 kernel: smp: Brought up 1 node, 2 CPUs Aug 12 23:57:16.964094 kernel: smpboot: Max logical packages: 1 Aug 12 23:57:16.964105 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Aug 12 23:57:16.964114 kernel: devtmpfs: initialized Aug 12 23:57:16.964123 kernel: x86/mm: Memory block size: 128MB Aug 12 23:57:16.964132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 12 23:57:16.964141 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 12 23:57:16.964150 kernel: pinctrl core: initialized pinctrl subsystem Aug 12 23:57:16.964158 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 12 23:57:16.964167 kernel: audit: initializing netlink subsys (disabled) Aug 12 23:57:16.964176 kernel: audit: type=2000 audit(1755043036.301:1): state=initialized audit_enabled=0 res=1 Aug 12 23:57:16.964188 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 12 23:57:16.964196 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 12 23:57:16.964205 kernel: cpuidle: using governor menu Aug 12 23:57:16.964214 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 12 23:57:16.964223 kernel: dca service started, version 1.12.1 Aug 12 23:57:16.964231 kernel: PCI: Using configuration type 1 for base access Aug 12 23:57:16.964240 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 12 23:57:16.964249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 12 23:57:16.964258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 12 23:57:16.964269 kernel: ACPI: Added _OSI(Module Device) Aug 12 23:57:16.964278 kernel: ACPI: Added _OSI(Processor Device) Aug 12 23:57:16.964287 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 12 23:57:16.964296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 12 23:57:16.964304 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 12 23:57:16.964313 kernel: ACPI: Interpreter enabled Aug 12 23:57:16.964322 kernel: ACPI: PM: (supports S0 S5) Aug 12 23:57:16.964331 kernel: ACPI: Using IOAPIC for interrupt routing Aug 12 23:57:16.964340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 12 23:57:16.964351 kernel: PCI: Using E820 reservations for host bridge windows Aug 12 23:57:16.964360 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 12 23:57:16.964369 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 12 23:57:16.964582 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 12 23:57:16.964744 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 12 23:57:16.964848 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 12 23:57:16.964860 kernel: acpiphp: Slot [3] registered Aug 12 23:57:16.964874 kernel: acpiphp: Slot [4] registered Aug 12 23:57:16.964882 kernel: acpiphp: Slot [5] registered Aug 12 23:57:16.964891 kernel: acpiphp: Slot [6] registered Aug 12 23:57:16.964900 kernel: acpiphp: Slot [7] registered Aug 12 23:57:16.964909 kernel: acpiphp: Slot [8] registered Aug 12 23:57:16.964918 kernel: acpiphp: Slot [9] registered Aug 12 23:57:16.964926 kernel: acpiphp: Slot [10] registered Aug 12 23:57:16.964935 kernel: acpiphp: Slot [11] registered Aug 12 23:57:16.964944 kernel: acpiphp: Slot [12] registered Aug 12 23:57:16.964953 kernel: acpiphp: Slot [13] registered Aug 12 23:57:16.964964 kernel: acpiphp: Slot [14] registered Aug 12 23:57:16.964973 kernel: acpiphp: Slot [15] registered Aug 12 23:57:16.964981 kernel: acpiphp: Slot [16] registered Aug 12 23:57:16.964990 kernel: acpiphp: Slot [17] registered Aug 12 23:57:16.964999 kernel: acpiphp: Slot [18] registered Aug 12 23:57:16.965008 kernel: acpiphp: Slot [19] registered Aug 12 23:57:16.965016 kernel: acpiphp: Slot [20] registered Aug 12 23:57:16.965025 kernel: acpiphp: Slot [21] registered Aug 12 23:57:16.965033 kernel: acpiphp: Slot [22] registered Aug 12 23:57:16.965044 kernel: acpiphp: Slot [23] registered Aug 12 23:57:16.965053 kernel: acpiphp: Slot [24] registered Aug 12 23:57:16.965062 kernel: acpiphp: Slot [25] registered Aug 12 23:57:16.965070 kernel: acpiphp: Slot [26] registered Aug 12 23:57:16.965079 kernel: acpiphp: Slot [27] registered Aug 12 23:57:16.965088 kernel: acpiphp: Slot [28] registered Aug 12 23:57:16.965096 kernel: acpiphp: Slot [29] registered Aug 12 23:57:16.965105 kernel: acpiphp: Slot [30] registered Aug 12 23:57:16.965114 kernel: acpiphp: Slot [31] registered Aug 12 23:57:16.965123 kernel: PCI host bridge to bus 0000:00 Aug 12 23:57:16.965239 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 12 23:57:16.965346 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 12 23:57:16.965459 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Aug 12 23:57:16.965549 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 12 23:57:16.965637 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 12 23:57:16.965745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 12 23:57:16.965933 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 12 23:57:16.966065 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 12 23:57:16.966202 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 12 23:57:16.966306 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 12 23:57:16.966429 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 12 23:57:16.966567 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 12 23:57:16.966666 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 12 23:57:16.966822 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 12 23:57:16.966954 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 12 23:57:16.967054 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 12 23:57:16.967201 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 12 23:57:16.967319 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 12 23:57:16.967457 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 12 23:57:16.968380 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 12 23:57:16.968503 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 12 23:57:16.968609 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 12 23:57:16.968743 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 12 23:57:16.968848 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 12 23:57:16.968953 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 12 23:57:16.969076 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 12 23:57:16.969189 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 12 23:57:16.969324 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 12 23:57:16.969426 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 12 23:57:16.969579 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 12 23:57:16.971801 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 12 23:57:16.971948 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 12 23:57:16.972048 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 12 23:57:16.972181 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 12 23:57:16.972282 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 12 23:57:16.972385 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 12 23:57:16.972483 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 12 23:57:16.974772 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 12 23:57:16.974934 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 12 23:57:16.975042 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 12 23:57:16.975158 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 12 23:57:16.975280 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 12 23:57:16.975387 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Aug 12 23:57:16.975489 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 12 23:57:16.975590 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 12 23:57:16.975728 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 12 23:57:16.975845 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 12 23:57:16.975952 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 12 23:57:16.975964 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 12 23:57:16.975973 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 12 23:57:16.975982 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 12 23:57:16.975991 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 12 23:57:16.976000 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 12 23:57:16.976010 kernel: iommu: Default domain type: Translated Aug 12 23:57:16.976022 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 12 23:57:16.976031 kernel: PCI: Using ACPI for IRQ routing Aug 12 23:57:16.976040 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 12 23:57:16.976049 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 12 23:57:16.976058 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 12 23:57:16.976162 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 12 23:57:16.976319 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 12 23:57:16.976424 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 12 23:57:16.976441 kernel: vgaarb: loaded Aug 12 23:57:16.976450 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 12 23:57:16.976459 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 12 23:57:16.976468 kernel: clocksource: Switched to clocksource kvm-clock Aug 12 23:57:16.976477 kernel: VFS: Disk quotas dquot_6.6.0 Aug 12 23:57:16.976487 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 12 23:57:16.976496 kernel: pnp: PnP ACPI init Aug 12 23:57:16.976505 kernel: pnp: PnP ACPI: found 4 devices Aug 12 23:57:16.976514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 12 23:57:16.976526 kernel: NET: Registered PF_INET protocol family Aug 12 23:57:16.976535 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 12 23:57:16.976544 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 12 23:57:16.976553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 12 23:57:16.976562 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 12 23:57:16.976571 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 12 23:57:16.976580 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 12 23:57:16.976589 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 12 23:57:16.976598 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 12 23:57:16.976610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 12 23:57:16.976619 kernel: NET: Registered PF_XDP protocol family Aug 12 23:57:16.978820 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 12 23:57:16.978936 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 12 23:57:16.979024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Aug 12 23:57:16.979112 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 12 23:57:16.979200 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 12 23:57:16.979328 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 12 23:57:16.979456 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 12 23:57:16.979470 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 12 23:57:16.979573 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 33360 usecs Aug 12 23:57:16.979586 kernel: PCI: CLS 0 bytes, default 64 Aug 12 23:57:16.979596 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 12 23:57:16.979605 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 12 23:57:16.979615 kernel: Initialise system trusted keyrings Aug 12 23:57:16.979624 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 12 23:57:16.979633 kernel: Key type asymmetric registered Aug 12 23:57:16.979646 kernel: Asymmetric key parser 'x509' registered Aug 12 23:57:16.979656 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 12 23:57:16.979665 kernel: io scheduler mq-deadline registered Aug 12 23:57:16.979674 kernel: io scheduler kyber registered Aug 12 23:57:16.979683 kernel: io scheduler bfq registered Aug 12 23:57:16.979708 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 12 23:57:16.979717 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 12 23:57:16.979726 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 12 23:57:16.979735 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 12 23:57:16.979747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 12 23:57:16.979757 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 12 23:57:16.979766 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 12 23:57:16.979775 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 12 23:57:16.979784 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 12 23:57:16.979927 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 12 23:57:16.979941 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 12 23:57:16.980034 kernel: rtc_cmos 00:03: registered as rtc0 Aug 12 23:57:16.980177 kernel: rtc_cmos 00:03: setting system clock to 2025-08-12T23:57:16 UTC (1755043036) Aug 12 23:57:16.980274 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 12 23:57:16.980286 kernel: intel_pstate: CPU model not supported Aug 12 23:57:16.980295 kernel: NET: Registered PF_INET6 protocol family Aug 12 23:57:16.980304 kernel: Segment Routing with IPv6 Aug 12 23:57:16.980313 kernel: In-situ OAM (IOAM) with IPv6 Aug 12 23:57:16.980322 kernel: NET: Registered PF_PACKET protocol family Aug 12 23:57:16.980331 kernel: Key type dns_resolver registered Aug 12 23:57:16.980341 kernel: IPI shorthand broadcast: enabled Aug 12 23:57:16.980354 kernel: sched_clock: Marking stable (812004561, 83914211)->(987948892, -92030120) Aug 12 23:57:16.980363 kernel: registered taskstats version 1 Aug 12 23:57:16.980372 kernel: Loading compiled-in X.509 certificates Aug 12 23:57:16.980381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb' Aug 12 23:57:16.980390 kernel: Key type .fscrypt registered Aug 12 23:57:16.980399 kernel: Key type fscrypt-provisioning registered Aug 12 
23:57:16.980408 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 12 23:57:16.980417 kernel: ima: Allocated hash algorithm: sha1 Aug 12 23:57:16.980429 kernel: ima: No architecture policies found Aug 12 23:57:16.980438 kernel: clk: Disabling unused clocks Aug 12 23:57:16.980447 kernel: Freeing unused kernel image (initmem) memory: 43504K Aug 12 23:57:16.980456 kernel: Write protecting the kernel read-only data: 38912k Aug 12 23:57:16.980466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Aug 12 23:57:16.980493 kernel: Run /init as init process Aug 12 23:57:16.980506 kernel: with arguments: Aug 12 23:57:16.980515 kernel: /init Aug 12 23:57:16.980524 kernel: with environment: Aug 12 23:57:16.980536 kernel: HOME=/ Aug 12 23:57:16.980545 kernel: TERM=linux Aug 12 23:57:16.980554 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 12 23:57:16.980565 systemd[1]: Successfully made /usr/ read-only. Aug 12 23:57:16.980579 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 12 23:57:16.980589 systemd[1]: Detected virtualization kvm. Aug 12 23:57:16.980599 systemd[1]: Detected architecture x86-64. Aug 12 23:57:16.980608 systemd[1]: Running in initrd. Aug 12 23:57:16.980620 systemd[1]: No hostname configured, using default hostname. Aug 12 23:57:16.980631 systemd[1]: Hostname set to . Aug 12 23:57:16.980641 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:57:16.980650 systemd[1]: Queued start job for default target initrd.target. Aug 12 23:57:16.980660 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:57:16.980670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:57:16.980681 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 12 23:57:16.982891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:57:16.982925 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 12 23:57:16.982941 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 12 23:57:16.982959 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 12 23:57:16.982980 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 12 23:57:16.982996 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:57:16.983011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:57:16.983027 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:57:16.983048 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:57:16.983063 systemd[1]: Reached target swap.target - Swaps. Aug 12 23:57:16.983082 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:57:16.983097 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:57:16.983113 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Aug 12 23:57:16.983134 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 12 23:57:16.983151 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 12 23:57:16.983166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:57:16.983181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:57:16.983195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:57:16.983209 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:57:16.983224 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 12 23:57:16.983240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:57:16.983255 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 12 23:57:16.983275 systemd[1]: Starting systemd-fsck-usr.service... Aug 12 23:57:16.983319 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:57:16.983329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:57:16.983340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:16.983350 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 12 23:57:16.983360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:57:16.983375 systemd[1]: Finished systemd-fsck-usr.service. Aug 12 23:57:16.983387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 12 23:57:16.983460 systemd-journald[184]: Collecting audit messages is disabled. Aug 12 23:57:16.983499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:57:16.983515 systemd-journald[184]: Journal started Aug 12 23:57:16.983545 systemd-journald[184]: Runtime Journal (/run/log/journal/32109db551f9495e8a53e9b7c0b91f04) is 4.9M, max 39.3M, 34.4M free. Aug 12 23:57:16.967995 systemd-modules-load[185]: Inserted module 'overlay' Aug 12 23:57:17.004597 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:57:17.004631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 12 23:57:17.006971 systemd-modules-load[185]: Inserted module 'br_netfilter' Aug 12 23:57:17.011355 kernel: Bridge firewalling registered Aug 12 23:57:17.010301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:57:17.013040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:57:17.016281 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:57:17.017334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:17.018347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:57:17.029907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:57:17.032867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:57:17.034151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:57:17.046033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 12 23:57:17.055089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 12 23:57:17.057365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:57:17.059883 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 12 23:57:17.079946 dracut-cmdline[220]: dracut-dracut-053 Aug 12 23:57:17.083387 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 12 23:57:17.095200 systemd-resolved[216]: Positive Trust Anchors: Aug 12 23:57:17.095814 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:57:17.096303 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:57:17.100952 systemd-resolved[216]: Defaulting to hostname 'linux'. Aug 12 23:57:17.103244 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:57:17.103684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:57:17.169756 kernel: SCSI subsystem initialized Aug 12 23:57:17.179725 kernel: Loading iSCSI transport class v2.0-870. Aug 12 23:57:17.190731 kernel: iscsi: registered transport (tcp) Aug 12 23:57:17.212842 kernel: iscsi: registered transport (qla4xxx) Aug 12 23:57:17.212934 kernel: QLogic iSCSI HBA Driver Aug 12 23:57:17.268416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 12 23:57:17.273919 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 12 23:57:17.308993 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:57:17.309087 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:57:17.310542 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 12 23:57:17.361752 kernel: raid6: avx2x4 gen() 21101 MB/s Aug 12 23:57:17.377740 kernel: raid6: avx2x2 gen() 23083 MB/s Aug 12 23:57:17.395044 kernel: raid6: avx2x1 gen() 18414 MB/s Aug 12 23:57:17.395123 kernel: raid6: using algorithm avx2x2 gen() 23083 MB/s Aug 12 23:57:17.412936 kernel: raid6: .... xor() 19225 MB/s, rmw enabled Aug 12 23:57:17.413015 kernel: raid6: using avx2x2 recovery algorithm Aug 12 23:57:17.435740 kernel: xor: automatically using best checksumming function avx Aug 12 23:57:17.597740 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 12 23:57:17.610792 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:57:17.615985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 12 23:57:17.645466 systemd-udevd[403]: Using default interface naming scheme 'v255'. Aug 12 23:57:17.652358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:57:17.659434 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 12 23:57:17.680618 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 12 23:57:17.717517 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:57:17.723918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:57:17.787755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:57:17.795012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 12 23:57:17.826754 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 12 23:57:17.828238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:57:17.829498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:57:17.831008 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:57:17.838260 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:57:17.860541 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:57:17.881032 kernel: scsi host0: Virtio SCSI HBA Aug 12 23:57:17.883729 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 12 23:57:17.885762 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 12 23:57:17.904944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 12 23:57:17.905012 kernel: GPT:9289727 != 125829119 Aug 12 23:57:17.905026 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 12 23:57:17.906030 kernel: GPT:9289727 != 125829119 Aug 12 23:57:17.906721 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 12 23:57:17.906756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:57:17.913355 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 12 23:57:17.913621 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Aug 12 23:57:17.923732 kernel: cryptd: max_cpu_qlen set to 1000 Aug 12 23:57:17.954788 kernel: ACPI: bus type USB registered Aug 12 23:57:17.954856 kernel: usbcore: registered new interface driver usbfs Aug 12 23:57:17.956044 kernel: usbcore: registered new interface driver hub Aug 12 23:57:17.956073 kernel: usbcore: registered new device driver usb Aug 12 23:57:17.960496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:57:17.960725 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:57:17.962617 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:57:17.963480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:57:17.964023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:17.965872 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:17.974980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:17.977248 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:57:17.993785 kernel: libata version 3.00 loaded. 
Aug 12 23:57:18.012490 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 12 23:57:18.016720 kernel: AVX2 version of gcm_enc/dec engaged. Aug 12 23:57:18.016782 kernel: AES CTR mode by8 optimization enabled Aug 12 23:57:18.021720 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (447) Aug 12 23:57:18.022714 kernel: scsi host1: ata_piix Aug 12 23:57:18.025399 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/vda3 scanned by (udev-worker) (462) Aug 12 23:57:18.026705 kernel: scsi host2: ata_piix Aug 12 23:57:18.026911 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 12 23:57:18.026926 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 12 23:57:18.065269 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 12 23:57:18.080610 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 12 23:57:18.080855 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 12 23:57:18.080992 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 12 23:57:18.081114 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 12 23:57:18.081234 kernel: hub 1-0:1.0: USB hub found Aug 12 23:57:18.081385 kernel: hub 1-0:1.0: 2 ports detected Aug 12 23:57:18.080402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:18.092055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 12 23:57:18.099598 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 12 23:57:18.100116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 12 23:57:18.111838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:57:18.120908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:57:18.123881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:57:18.137981 disk-uuid[531]: Primary Header is updated. Aug 12 23:57:18.137981 disk-uuid[531]: Secondary Entries is updated. Aug 12 23:57:18.137981 disk-uuid[531]: Secondary Header is updated. Aug 12 23:57:18.147311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:57:18.152721 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:57:19.159727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:57:19.162181 disk-uuid[537]: The operation has completed successfully. Aug 12 23:57:19.219545 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:57:19.220353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:57:19.260076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:57:19.266379 sh[561]: Success Aug 12 23:57:19.285747 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 12 23:57:19.375578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:57:19.377919 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 12 23:57:19.384744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 12 23:57:19.404939 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 Aug 12 23:57:19.405039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:57:19.405903 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 12 23:57:19.406908 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 12 23:57:19.407981 kernel: BTRFS info (device dm-0): using free space tree Aug 12 23:57:19.416320 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:57:19.417495 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:57:19.424020 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:57:19.426879 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 12 23:57:19.447946 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:57:19.448025 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:57:19.448039 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:57:19.452747 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:57:19.459908 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:57:19.462593 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:57:19.472099 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 12 23:57:19.625294 ignition[639]: Ignition 2.20.0 Aug 12 23:57:19.625311 ignition[639]: Stage: fetch-offline Aug 12 23:57:19.625369 ignition[639]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:19.625384 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:19.628207 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:57:19.625542 ignition[639]: parsed url from cmdline: "" Aug 12 23:57:19.625554 ignition[639]: no config URL provided Aug 12 23:57:19.625563 ignition[639]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:57:19.625575 ignition[639]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:57:19.625583 ignition[639]: failed to fetch config: resource requires networking Aug 12 23:57:19.625932 ignition[639]: Ignition finished successfully Aug 12 23:57:19.634801 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:57:19.640997 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:57:19.685890 systemd-networkd[747]: lo: Link UP Aug 12 23:57:19.685904 systemd-networkd[747]: lo: Gained carrier Aug 12 23:57:19.688476 systemd-networkd[747]: Enumeration completed Aug 12 23:57:19.688851 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 12 23:57:19.688856 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 12 23:57:19.690518 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:57:19.690970 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 12 23:57:19.690975 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:57:19.691753 systemd-networkd[747]: eth0: Link UP Aug 12 23:57:19.691758 systemd-networkd[747]: eth0: Gained carrier Aug 12 23:57:19.691769 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 12 23:57:19.691861 systemd[1]: Reached target network.target - Network. Aug 12 23:57:19.695734 systemd-networkd[747]: eth1: Link UP Aug 12 23:57:19.695740 systemd-networkd[747]: eth1: Gained carrier Aug 12 23:57:19.695761 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:57:19.699009 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 12 23:57:19.711809 systemd-networkd[747]: eth0: DHCPv4 address 24.199.122.14/20, gateway 24.199.112.1 acquired from 169.254.169.253 Aug 12 23:57:19.716820 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Aug 12 23:57:19.728476 ignition[751]: Ignition 2.20.0 Aug 12 23:57:19.728492 ignition[751]: Stage: fetch Aug 12 23:57:19.728824 ignition[751]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:19.728845 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:19.729018 ignition[751]: parsed url from cmdline: "" Aug 12 23:57:19.729024 ignition[751]: no config URL provided Aug 12 23:57:19.729034 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:57:19.729049 ignition[751]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:57:19.729083 ignition[751]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 12 23:57:19.750129 ignition[751]: GET result: OK Aug 12 23:57:19.750900 ignition[751]: parsing config with SHA512: 04d7029c07ee68d524960184601cc75f8f236a04820ed395ede4d092221e42c184d96167b1f07ab2782e2e6607e319b078b4ecc684f52fa76b61f026117f4e6d Aug 12 23:57:19.755435 unknown[751]: fetched base config from "system" Aug 12 23:57:19.755897 ignition[751]: fetch: fetch complete Aug 12 23:57:19.755449 unknown[751]: fetched base config from "system" Aug 12 23:57:19.755907 ignition[751]: fetch: fetch passed Aug 12 23:57:19.755458 unknown[751]: fetched user config from "digitalocean" Aug 12 23:57:19.756006 ignition[751]: Ignition finished successfully Aug 12 23:57:19.758236 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 12 23:57:19.764951 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 12 23:57:19.808034 ignition[758]: Ignition 2.20.0 Aug 12 23:57:19.808056 ignition[758]: Stage: kargs Aug 12 23:57:19.808448 ignition[758]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:19.808469 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:19.809831 ignition[758]: kargs: kargs passed Aug 12 23:57:19.809931 ignition[758]: Ignition finished successfully Aug 12 23:57:19.811767 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:57:19.823125 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 12 23:57:19.845009 ignition[764]: Ignition 2.20.0 Aug 12 23:57:19.845030 ignition[764]: Stage: disks Aug 12 23:57:19.845312 ignition[764]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:19.845325 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:19.847597 ignition[764]: disks: disks passed Aug 12 23:57:19.849160 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:57:19.847669 ignition[764]: Ignition finished successfully Aug 12 23:57:19.853395 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:57:19.854394 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:57:19.855214 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:57:19.856070 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:57:19.856961 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:57:19.865235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 12 23:57:19.886546 systemd-fsck[772]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 12 23:57:19.889510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:57:19.895096 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 12 23:57:20.011902 kernel: EXT4-fs (vda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none. Aug 12 23:57:20.012914 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:57:20.013967 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:57:20.025877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:57:20.028823 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:57:20.031904 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Aug 12 23:57:20.036271 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 12 23:57:20.037234 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:57:20.037275 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:57:20.042710 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (780) Aug 12 23:57:20.044575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 12 23:57:20.046843 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:57:20.050076 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:57:20.050122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:57:20.051746 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:57:20.063991 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:57:20.068410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:57:20.145729 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:57:20.156802 coreos-metadata[783]: Aug 12 23:57:20.156 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:57:20.159334 coreos-metadata[782]: Aug 12 23:57:20.157 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:57:20.160378 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:57:20.166226 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:57:20.169269 coreos-metadata[783]: Aug 12 23:57:20.169 INFO Fetch successful Aug 12 23:57:20.171012 coreos-metadata[782]: Aug 12 23:57:20.170 INFO Fetch successful Aug 12 23:57:20.174433 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:57:20.180769 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Aug 12 23:57:20.180916 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Aug 12 23:57:20.182992 coreos-metadata[783]: Aug 12 23:57:20.181 INFO wrote hostname ci-4230.2.2-e-bc3605f087 to /sysroot/etc/hostname Aug 12 23:57:20.183179 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 12 23:57:20.300836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:57:20.310993 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:57:20.314028 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:57:20.326742 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:57:20.355726 ignition[901]: INFO : Ignition 2.20.0 Aug 12 23:57:20.355726 ignition[901]: INFO : Stage: mount Aug 12 23:57:20.355726 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:20.355726 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:20.355012 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 12 23:57:20.358211 ignition[901]: INFO : mount: mount passed Aug 12 23:57:20.358211 ignition[901]: INFO : Ignition finished successfully Aug 12 23:57:20.357178 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:57:20.363927 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:57:20.405764 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:57:20.417055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:57:20.428752 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (914) Aug 12 23:57:20.431783 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:57:20.431874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:57:20.431891 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:57:20.436062 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:57:20.437922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:57:20.474772 ignition[931]: INFO : Ignition 2.20.0 Aug 12 23:57:20.474772 ignition[931]: INFO : Stage: files Aug 12 23:57:20.476423 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:20.476423 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:20.476423 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:57:20.478643 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:57:20.478643 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:57:20.480530 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:57:20.481275 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:57:20.481275 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:57:20.481070 unknown[931]: wrote ssh authorized keys file for user: core Aug 12 23:57:20.483362 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:57:20.483362 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:57:20.483362 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:57:20.485249 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:57:20.485249 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 12 23:57:20.485249 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 12 23:57:20.485249 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 12 23:57:20.485249 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 12 23:57:20.889930 systemd-networkd[747]: eth0: Gained IPv6LL Aug 12 23:57:20.961197 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Aug 12 23:57:21.299084 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 12 23:57:21.300935 ignition[931]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:57:21.300935 ignition[931]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:57:21.300935 ignition[931]: INFO : files: files passed Aug 12 23:57:21.300935 ignition[931]: INFO : Ignition finished successfully Aug 12 23:57:21.301548 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:57:21.307999 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
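The "files" stage above writes /sysroot/home/core/install.sh and /sysroot/etc/flatcar/update.conf, downloads the kubernetes sysext image from extensions.flatcar.org, and links it into /etc/extensions so it can be merged into /usr. As a rough sketch of what those two operations amount to (Ignition itself performs them from the supplied config; this is not its implementation, only an illustration using the paths and URL recorded in the log):

    import os
    import urllib.request

    # Paths and URL copied from the Ignition log above; /sysroot is the staging
    # root that Ignition writes into from the initramfs.
    SYSROOT = "/sysroot"
    RAW_URL = "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"
    RAW_DEST = os.path.join(SYSROOT, "opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")
    LINK_PATH = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    LINK_TARGET = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"

    def stage_kubernetes_sysext() -> None:
        # op(6) in the log: fetch the extension image.
        os.makedirs(os.path.dirname(RAW_DEST), exist_ok=True)
        urllib.request.urlretrieve(RAW_URL, RAW_DEST)
        # op(5) in the log: create the /etc/extensions symlink that
        # systemd-sysext later picks up.
        os.makedirs(os.path.dirname(LINK_PATH), exist_ok=True)
        if not os.path.islink(LINK_PATH):
            os.symlink(LINK_TARGET, LINK_PATH)

    if __name__ == "__main__":
        stage_kubernetes_sysext()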
Aug 12 23:57:21.311912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:57:21.321254 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:57:21.321398 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 12 23:57:21.331945 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:57:21.331945 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:57:21.334107 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:57:21.335158 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:57:21.336224 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:57:21.337918 systemd-networkd[747]: eth1: Gained IPv6LL Aug 12 23:57:21.341960 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:57:21.384450 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:57:21.384618 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:57:21.385952 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:57:21.386826 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:57:21.387547 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:57:21.388941 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:57:21.420273 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:57:21.430113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:57:21.449008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:57:21.450016 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:57:21.451363 systemd[1]: Stopped target timers.target - Timer Units. Aug 12 23:57:21.452439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:57:21.452803 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:57:21.454115 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:57:21.455301 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:57:21.456037 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:57:21.456954 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:57:21.457935 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:57:21.459046 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:57:21.459943 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:57:21.460978 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:57:21.461857 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:57:21.462843 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:57:21.463642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:57:21.463981 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Aug 12 23:57:21.465598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:57:21.466549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:57:21.467629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:57:21.467857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:57:21.468654 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:57:21.468939 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:57:21.470524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:57:21.470870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:57:21.472327 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:57:21.472528 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:57:21.473580 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 12 23:57:21.473819 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 12 23:57:21.480139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:57:21.480894 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:57:21.481179 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:57:21.494299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:57:21.497361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:57:21.497769 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:57:21.500435 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:57:21.500677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:57:21.514382 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:57:21.514765 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:57:21.523965 ignition[984]: INFO : Ignition 2.20.0 Aug 12 23:57:21.526652 ignition[984]: INFO : Stage: umount Aug 12 23:57:21.526652 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:21.526652 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:57:21.526652 ignition[984]: INFO : umount: umount passed Aug 12 23:57:21.526652 ignition[984]: INFO : Ignition finished successfully Aug 12 23:57:21.529935 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:57:21.530568 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:57:21.531655 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:57:21.532996 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:57:21.533824 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:57:21.533885 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:57:21.534313 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 12 23:57:21.534363 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 12 23:57:21.536247 systemd[1]: Stopped target network.target - Network. Aug 12 23:57:21.536546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:57:21.536602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 12 23:57:21.536986 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:57:21.537257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:57:21.541792 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:57:21.542797 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:57:21.543083 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:57:21.543469 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:57:21.543526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:57:21.544913 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:57:21.545004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:57:21.546062 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:57:21.546204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:57:21.546905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:57:21.546984 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:57:21.548017 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:57:21.551898 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:57:21.554719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:57:21.555979 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:57:21.556202 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:57:21.557399 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:57:21.557598 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:57:21.563299 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:57:21.563883 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:57:21.564005 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:57:21.566538 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:57:21.569367 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:57:21.569438 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:57:21.570277 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:57:21.570358 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:57:21.581965 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:57:21.583868 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:57:21.583992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:57:21.584628 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:57:21.584713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:57:21.585933 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:57:21.585989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:57:21.586475 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:57:21.586524 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:57:21.587614 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 12 23:57:21.592719 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:57:21.592832 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:57:21.603387 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:57:21.603639 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:57:21.606390 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:57:21.606501 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 12 23:57:21.607028 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:57:21.607065 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:57:21.608150 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:57:21.608217 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:57:21.608748 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:57:21.608805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 12 23:57:21.609239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:57:21.609286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:57:21.613010 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 12 23:57:21.614852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 12 23:57:21.614956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:57:21.615984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:57:21.616042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:21.618525 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 12 23:57:21.618599 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:57:21.619295 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:57:21.619453 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 12 23:57:21.644306 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:57:21.644465 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 12 23:57:21.646840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 12 23:57:21.654114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 12 23:57:21.669623 systemd[1]: Switching root. Aug 12 23:57:21.706886 systemd-journald[184]: Journal stopped Aug 12 23:57:22.966229 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Aug 12 23:57:22.966323 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:57:22.966342 kernel: SELinux: policy capability open_perms=1 Aug 12 23:57:22.966355 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:57:22.966368 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:57:22.966382 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:57:22.966395 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:57:22.966423 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:57:22.966437 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:57:22.966450 kernel: audit: type=1403 audit(1755043041.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:57:22.966472 systemd[1]: Successfully loaded SELinux policy in 43.652ms. Aug 12 23:57:22.966491 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.812ms. Aug 12 23:57:22.966507 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 12 23:57:22.966522 systemd[1]: Detected virtualization kvm. Aug 12 23:57:22.966542 systemd[1]: Detected architecture x86-64. Aug 12 23:57:22.966571 systemd[1]: Detected first boot. Aug 12 23:57:22.966593 systemd[1]: Hostname set to . Aug 12 23:57:22.966618 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:57:22.966642 zram_generator::config[1030]: No configuration found. Aug 12 23:57:22.966667 kernel: Guest personality initialized and is inactive Aug 12 23:57:22.970914 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 12 23:57:22.970958 kernel: Initialized host personality Aug 12 23:57:22.970972 kernel: NET: Registered PF_VSOCK protocol family Aug 12 23:57:22.970987 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:57:22.971017 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 12 23:57:22.971030 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:57:22.971042 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 12 23:57:22.971055 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:57:22.971069 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 12 23:57:22.971081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 12 23:57:22.971094 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 12 23:57:22.971107 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 12 23:57:22.971123 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 12 23:57:22.971136 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 12 23:57:22.971149 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 12 23:57:22.971161 systemd[1]: Created slice user.slice - User and Session Slice. Aug 12 23:57:22.971174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:57:22.971187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 12 23:57:22.971200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 12 23:57:22.971213 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 12 23:57:22.971235 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 12 23:57:22.971248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:57:22.971261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 12 23:57:22.971275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:57:22.971288 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 12 23:57:22.971300 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 12 23:57:22.971313 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 12 23:57:22.971333 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 12 23:57:22.971347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:57:22.971361 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:57:22.971378 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:57:22.971398 systemd[1]: Reached target swap.target - Swaps. Aug 12 23:57:22.971417 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 12 23:57:22.971438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 12 23:57:22.971459 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 12 23:57:22.971478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:57:22.971505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:57:22.973761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:57:22.973799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 12 23:57:22.973813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 12 23:57:22.973827 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 12 23:57:22.973841 systemd[1]: Mounting media.mount - External Media Directory... Aug 12 23:57:22.973855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:22.973878 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 12 23:57:22.973896 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 12 23:57:22.973917 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 12 23:57:22.973931 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:57:22.973945 systemd[1]: Reached target machines.target - Containers. Aug 12 23:57:22.973958 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 12 23:57:22.973971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:57:22.973984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Aug 12 23:57:22.973997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 12 23:57:22.974010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:57:22.974026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:57:22.974040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:57:22.974054 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 12 23:57:22.974070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:57:22.974091 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:57:22.974112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:57:22.974126 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 12 23:57:22.974139 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 12 23:57:22.974151 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:57:22.974182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:57:22.974202 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:57:22.974217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:57:22.974236 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 12 23:57:22.974255 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 12 23:57:22.974275 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 12 23:57:22.974297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:57:22.974326 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:57:22.974349 systemd[1]: Stopped verity-setup.service. Aug 12 23:57:22.974369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:22.974386 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 12 23:57:22.974401 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 12 23:57:22.974414 systemd[1]: Mounted media.mount - External Media Directory. Aug 12 23:57:22.974427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 12 23:57:22.974440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 12 23:57:22.974453 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 12 23:57:22.974465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:57:22.974478 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:57:22.974496 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 12 23:57:22.974516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:57:22.974537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:57:22.974571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 12 23:57:22.974596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:57:22.974623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:57:22.974646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:57:22.974670 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 12 23:57:22.978501 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 12 23:57:22.978673 systemd-journald[1104]: Collecting audit messages is disabled. Aug 12 23:57:22.978733 kernel: loop: module loaded Aug 12 23:57:22.978753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:57:22.978771 systemd-journald[1104]: Journal started Aug 12 23:57:22.978802 systemd-journald[1104]: Runtime Journal (/run/log/journal/32109db551f9495e8a53e9b7c0b91f04) is 4.9M, max 39.3M, 34.4M free. Aug 12 23:57:22.653593 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:57:22.660437 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 12 23:57:22.661018 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:57:22.986720 kernel: fuse: init (API version 7.39) Aug 12 23:57:22.988132 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:57:22.989544 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:57:22.990749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:57:22.991513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 12 23:57:22.992244 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 12 23:57:22.995419 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:57:22.996766 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 12 23:57:23.017801 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 12 23:57:23.018274 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:57:23.018321 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:57:23.021964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 12 23:57:23.032904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:57:23.034856 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 12 23:57:23.035347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:57:23.041940 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 12 23:57:23.043939 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 12 23:57:23.044341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:57:23.047817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 12 23:57:23.048840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:57:23.059966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Aug 12 23:57:23.064803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 12 23:57:23.065602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:57:23.077359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:57:23.082765 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 12 23:57:23.108104 systemd-journald[1104]: Time spent on flushing to /var/log/journal/32109db551f9495e8a53e9b7c0b91f04 is 91.988ms for 978 entries. Aug 12 23:57:23.108104 systemd-journald[1104]: System Journal (/var/log/journal/32109db551f9495e8a53e9b7c0b91f04) is 8M, max 195.6M, 187.6M free. Aug 12 23:57:23.214938 systemd-journald[1104]: Received client request to flush runtime journal. Aug 12 23:57:23.214991 kernel: loop0: detected capacity change from 0 to 138176 Aug 12 23:57:23.215014 kernel: ACPI: bus type drm_connector registered Aug 12 23:57:23.215029 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 12 23:57:23.135168 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 12 23:57:23.156902 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 12 23:57:23.157675 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 12 23:57:23.158134 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 12 23:57:23.163012 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 12 23:57:23.182011 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:57:23.182801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:57:23.198939 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 12 23:57:23.217460 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 12 23:57:23.242716 kernel: loop1: detected capacity change from 0 to 229808 Aug 12 23:57:23.275315 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 12 23:57:23.286989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:57:23.297350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:57:23.307988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 12 23:57:23.313202 kernel: loop2: detected capacity change from 0 to 8 Aug 12 23:57:23.352728 kernel: loop3: detected capacity change from 0 to 147912 Aug 12 23:57:23.372565 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 12 23:57:23.389071 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 12 23:57:23.389099 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 12 23:57:23.399369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:57:23.415790 kernel: loop4: detected capacity change from 0 to 138176 Aug 12 23:57:23.433819 kernel: loop5: detected capacity change from 0 to 229808 Aug 12 23:57:23.474735 kernel: loop6: detected capacity change from 0 to 8 Aug 12 23:57:23.480882 kernel: loop7: detected capacity change from 0 to 147912 Aug 12 23:57:23.524522 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
Aug 12 23:57:23.526628 (sd-merge)[1182]: Merged extensions into '/usr'. Aug 12 23:57:23.540745 systemd[1]: Reload requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Aug 12 23:57:23.540773 systemd[1]: Reloading... Aug 12 23:57:23.827857 zram_generator::config[1210]: No configuration found. Aug 12 23:57:23.893997 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:57:24.051047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:57:24.151939 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:57:24.152134 systemd[1]: Reloading finished in 607 ms. Aug 12 23:57:24.177784 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 12 23:57:24.178746 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 12 23:57:24.191991 systemd[1]: Starting ensure-sysext.service... Aug 12 23:57:24.204907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:57:24.229469 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Aug 12 23:57:24.229488 systemd[1]: Reloading... Aug 12 23:57:24.275276 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 12 23:57:24.275635 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 12 23:57:24.278004 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 12 23:57:24.278443 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 12 23:57:24.278530 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 12 23:57:24.287367 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:57:24.287384 systemd-tmpfiles[1254]: Skipping /boot Aug 12 23:57:24.329410 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:57:24.329428 systemd-tmpfiles[1254]: Skipping /boot Aug 12 23:57:24.394728 zram_generator::config[1284]: No configuration found. Aug 12 23:57:24.564865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:57:24.666069 systemd[1]: Reloading finished in 435 ms. Aug 12 23:57:24.681948 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 12 23:57:24.696899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:57:24.716327 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:57:24.721091 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 12 23:57:24.730148 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 12 23:57:24.735160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 12 23:57:24.746065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
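The sd-merge step above combines the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-digitalocean' extension images into /usr. A small sketch of how such images can be enumerated is below; /etc/extensions is grounded in the kubernetes.raw symlink written earlier in this log, while the other search directories are conventional systemd-sysext locations and are an assumption about this image's layout.

    from pathlib import Path

    # /etc/extensions appears earlier in this log; the remaining directories
    # are standard systemd-sysext search paths listed here as an assumption.
    SEARCH_DIRS = [Path(p) for p in ("/etc/extensions", "/run/extensions", "/var/lib/extensions")]

    def list_sysext_images() -> list[str]:
        """Return the *.raw images (or symlinks to them) that would be considered for merging."""
        images: list[str] = []
        for directory in SEARCH_DIRS:
            if directory.is_dir():
                images.extend(sorted(str(path) for path in directory.glob("*.raw")))
        return images

    if __name__ == "__main__":
        for image in list_sysext_images():
            print(image)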
Aug 12 23:57:24.749062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 12 23:57:24.754406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.757032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:57:24.763141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:57:24.771288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:57:24.776035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:57:24.777106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:57:24.777264 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:57:24.777401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.788065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 12 23:57:24.790236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.790485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:57:24.790739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:57:24.790849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:57:24.790967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.802142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.805046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:57:24.816152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:57:24.816818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:57:24.817069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:57:24.817327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:24.819006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:57:24.820814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:57:24.827745 systemd[1]: Finished ensure-sysext.service. 
Aug 12 23:57:24.834739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 12 23:57:24.840405 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Aug 12 23:57:24.850265 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 12 23:57:24.852795 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 12 23:57:24.853679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:57:24.853917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:57:24.858246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:57:24.858304 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:57:24.873352 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 12 23:57:24.874270 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:57:24.874850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:57:24.885915 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 12 23:57:24.890589 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:57:24.892528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:57:24.893573 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:57:24.900954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:57:24.909978 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:57:24.922275 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 12 23:57:24.926049 augenrules[1379]: No rules Aug 12 23:57:24.930670 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:57:24.932282 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:57:24.960201 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 12 23:57:25.069924 systemd-resolved[1337]: Positive Trust Anchors: Aug 12 23:57:25.069940 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:57:25.069978 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:57:25.070033 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 12 23:57:25.083347 systemd-resolved[1337]: Using system hostname 'ci-4230.2.2-e-bc3605f087'. Aug 12 23:57:25.087541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:57:25.088117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 12 23:57:25.124968 systemd-networkd[1371]: lo: Link UP Aug 12 23:57:25.124977 systemd-networkd[1371]: lo: Gained carrier Aug 12 23:57:25.126039 systemd-networkd[1371]: Enumeration completed Aug 12 23:57:25.126190 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:57:25.126926 systemd[1]: Reached target network.target - Network. Aug 12 23:57:25.134017 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 12 23:57:25.142011 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 12 23:57:25.164355 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Aug 12 23:57:25.165905 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 12 23:57:25.173632 systemd[1]: Reached target time-set.target - System Time Set. Aug 12 23:57:25.181425 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 12 23:57:25.183775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:25.183973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:57:25.185753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:57:25.190978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:57:25.200999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:57:25.202308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:57:25.202361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:57:25.202400 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:57:25.202423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:57:25.221610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:57:25.221906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:57:25.222723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:57:25.229299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:57:25.229546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:57:25.235714 kernel: ISO 9660 Extensions: RRIP_1991A Aug 12 23:57:25.238745 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 12 23:57:25.240373 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:57:25.240603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 12 23:57:25.244429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1396) Aug 12 23:57:25.243810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:57:25.245558 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 12 23:57:25.270319 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-52:25:6c:e2:b9:d8.network. Aug 12 23:57:25.272548 systemd-networkd[1371]: eth1: Link UP Aug 12 23:57:25.272562 systemd-networkd[1371]: eth1: Gained carrier Aug 12 23:57:25.276771 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Aug 12 23:57:25.298718 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 12 23:57:25.307776 kernel: ACPI: button: Power Button [PWRF] Aug 12 23:57:25.334719 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 12 23:57:25.361790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:57:25.384151 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 12 23:57:25.385676 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-86:09:e6:ae:a3:ce.network. Aug 12 23:57:25.386714 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 12 23:57:25.387380 systemd-networkd[1371]: eth0: Link UP Aug 12 23:57:25.388028 systemd-networkd[1371]: eth0: Gained carrier Aug 12 23:57:25.424139 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 12 23:57:25.460753 kernel: mousedev: PS/2 mouse device common for all mice Aug 12 23:57:25.462124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:25.477880 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 12 23:57:25.479717 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 12 23:57:25.491741 kernel: Console: switching to colour dummy device 80x25 Aug 12 23:57:25.491868 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 12 23:57:25.491901 kernel: [drm] features: -context_init Aug 12 23:57:25.495743 kernel: [drm] number of scanouts: 1 Aug 12 23:57:25.497723 kernel: [drm] number of cap sets: 0 Aug 12 23:57:25.502719 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 12 23:57:25.512007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:57:25.512285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:25.515549 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:57:25.531212 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 12 23:57:25.531311 kernel: Console: switching to colour frame buffer device 128x48 Aug 12 23:57:25.543150 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 12 23:57:25.563307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:25.595901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:57:25.596411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 12 23:57:25.602862 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:57:25.617503 kernel: EDAC MC: Ver: 3.0.0 Aug 12 23:57:25.624924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:57:25.661434 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 12 23:57:25.670085 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 12 23:57:25.684714 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:57:25.709800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:57:25.716088 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 12 23:57:25.717768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:57:25.717885 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:57:25.718063 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 12 23:57:25.718176 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 12 23:57:25.718593 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 12 23:57:25.718952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 12 23:57:25.719045 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 12 23:57:25.719106 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 12 23:57:25.719133 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:57:25.719185 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:57:25.722803 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 12 23:57:25.724969 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 12 23:57:25.729355 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 12 23:57:25.731059 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 12 23:57:25.731721 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 12 23:57:25.740582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 12 23:57:25.741916 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 12 23:57:25.754968 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 12 23:57:25.758476 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:57:25.759915 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:57:25.760492 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:57:25.761389 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:57:25.762613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:57:25.762650 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:57:25.769910 systemd[1]: Starting containerd.service - containerd container runtime... 
Aug 12 23:57:25.773394 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 12 23:57:25.783015 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:57:25.787869 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:57:25.801953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:57:25.803777 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:57:25.817933 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:57:25.824478 jq[1454]: false Aug 12 23:57:25.825727 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 12 23:57:25.837397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 12 23:57:25.846332 dbus-daemon[1453]: [system] SELinux support is enabled Aug 12 23:57:25.851040 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:57:25.854086 coreos-metadata[1452]: Aug 12 23:57:25.853 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:57:25.854490 coreos-metadata[1452]: Aug 12 23:57:25.854 INFO Fetch successful Aug 12 23:57:25.857105 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:57:25.857870 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:57:25.865587 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:57:25.874730 extend-filesystems[1457]: Found loop4 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found loop5 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found loop6 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found loop7 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda1 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda2 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda3 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found usr Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda4 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda6 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda7 Aug 12 23:57:25.874730 extend-filesystems[1457]: Found vda9 Aug 12 23:57:25.874730 extend-filesystems[1457]: Checking size of /dev/vda9 Aug 12 23:57:25.880844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:57:25.924379 jq[1470]: true Aug 12 23:57:25.882390 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:57:25.894538 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 12 23:57:25.920256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:57:25.920501 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:57:25.920875 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:57:25.921114 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 12 23:57:25.924028 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:57:25.924279 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 12 23:57:25.945878 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:57:25.945933 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 12 23:57:25.948852 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:57:25.948971 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 12 23:57:25.948995 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 12 23:57:25.951575 update_engine[1469]: I20250812 23:57:25.951475 1469 main.cc:92] Flatcar Update Engine starting Aug 12 23:57:25.962247 extend-filesystems[1457]: Resized partition /dev/vda9 Aug 12 23:57:25.970296 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Aug 12 23:57:25.988861 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 12 23:57:25.962985 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:57:25.988967 update_engine[1469]: I20250812 23:57:25.963337 1469 update_check_scheduler.cc:74] Next update check in 8m57s Aug 12 23:57:25.971032 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:57:25.973846 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 12 23:57:25.986266 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 12 23:57:25.993916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1373) Aug 12 23:57:25.988274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:57:25.994091 jq[1478]: true Aug 12 23:57:26.095832 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 12 23:57:26.097313 systemd-logind[1464]: New seat seat0. Aug 12 23:57:26.106938 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button) Aug 12 23:57:26.106965 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 12 23:57:26.107238 systemd[1]: Started systemd-logind.service - User Login Management. Aug 12 23:57:26.112054 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:57:26.112054 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 12 23:57:26.112054 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 12 23:57:26.122511 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Aug 12 23:57:26.122511 extend-filesystems[1457]: Found vdb Aug 12 23:57:26.114091 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 12 23:57:26.114407 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:57:26.189782 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:57:26.191740 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:57:26.231113 systemd[1]: Starting sshkeys.service... 
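The extend-filesystems run above grew the root filesystem on /dev/vda9 on-line, from 553472 to 15121403 4k blocks, while it was mounted at /. A rough hand-run equivalent (a sketch only; requires root, and resize2fs with no size argument grows the filesystem to fill the partition):

# Current size of the mounted root filesystem.
df -h /
# ext4 supports on-line growth while mounted read-write.
resize2fs /dev/vda9
# Confirm the block count reported in the log (15121403 blocks of 4k).
dumpe2fs -h /dev/vda9 | grep 'Block count'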
Aug 12 23:57:26.274207 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 12 23:57:26.287127 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 12 23:57:26.351816 coreos-metadata[1520]: Aug 12 23:57:26.351 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:57:26.365045 coreos-metadata[1520]: Aug 12 23:57:26.365 INFO Fetch successful Aug 12 23:57:26.378888 unknown[1520]: wrote ssh authorized keys file for user: core Aug 12 23:57:26.381720 containerd[1483]: time="2025-08-12T23:57:26.380651823Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 12 23:57:26.410748 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:57:26.413749 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 12 23:57:26.417922 systemd[1]: Finished sshkeys.service. Aug 12 23:57:26.425769 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:57:26.435769 containerd[1483]: time="2025-08-12T23:57:26.435463540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.437781 containerd[1483]: time="2025-08-12T23:57:26.437740869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:57:26.437900 containerd[1483]: time="2025-08-12T23:57:26.437886683Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 12 23:57:26.437962 containerd[1483]: time="2025-08-12T23:57:26.437951320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 12 23:57:26.438230 containerd[1483]: time="2025-08-12T23:57:26.438211535Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 12 23:57:26.438304 containerd[1483]: time="2025-08-12T23:57:26.438293040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.438423 containerd[1483]: time="2025-08-12T23:57:26.438406520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:57:26.438481 containerd[1483]: time="2025-08-12T23:57:26.438469142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.438874 containerd[1483]: time="2025-08-12T23:57:26.438844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:57:26.438961 containerd[1483]: time="2025-08-12T23:57:26.438948426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.439049 containerd[1483]: time="2025-08-12T23:57:26.439034372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:57:26.439102 containerd[1483]: time="2025-08-12T23:57:26.439091356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.439248 containerd[1483]: time="2025-08-12T23:57:26.439234712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.439607 containerd[1483]: time="2025-08-12T23:57:26.439588266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:57:26.439946 containerd[1483]: time="2025-08-12T23:57:26.439886366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:57:26.440023 containerd[1483]: time="2025-08-12T23:57:26.440009099Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 12 23:57:26.440193 containerd[1483]: time="2025-08-12T23:57:26.440177069Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 12 23:57:26.440301 containerd[1483]: time="2025-08-12T23:57:26.440287524Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:57:26.442850 containerd[1483]: time="2025-08-12T23:57:26.442801303Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 12 23:57:26.443047 containerd[1483]: time="2025-08-12T23:57:26.443028280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 12 23:57:26.443158 containerd[1483]: time="2025-08-12T23:57:26.443144742Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 12 23:57:26.443226 containerd[1483]: time="2025-08-12T23:57:26.443214351Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 12 23:57:26.443285 containerd[1483]: time="2025-08-12T23:57:26.443274595Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 12 23:57:26.443493 containerd[1483]: time="2025-08-12T23:57:26.443478082Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 12 23:57:26.443950 containerd[1483]: time="2025-08-12T23:57:26.443933283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 12 23:57:26.444149 containerd[1483]: time="2025-08-12T23:57:26.444131728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 12 23:57:26.444252 containerd[1483]: time="2025-08-12T23:57:26.444207299Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 12 23:57:26.444322 containerd[1483]: time="2025-08-12T23:57:26.444309187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 12 23:57:26.444380 containerd[1483]: time="2025-08-12T23:57:26.444369581Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 12 23:57:26.444453 containerd[1483]: time="2025-08-12T23:57:26.444440398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.444513 containerd[1483]: time="2025-08-12T23:57:26.444501376Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.444574 containerd[1483]: time="2025-08-12T23:57:26.444562962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.444635 containerd[1483]: time="2025-08-12T23:57:26.444623239Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444725620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444746444Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444761578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444787190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444807355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444823767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444840530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444855587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444873159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444888753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444906606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444922391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444949147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445156 containerd[1483]: time="2025-08-12T23:57:26.444984919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445002161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445020827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445038931Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445066885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445083719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.445621 containerd[1483]: time="2025-08-12T23:57:26.445098473Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.445840574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.445873201Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.445965137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.445980482Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.445992932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.446008563Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.446020875Z" level=info msg="NRI interface is disabled by configuration." Aug 12 23:57:26.447101 containerd[1483]: time="2025-08-12T23:57:26.446033867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 12 23:57:26.447391 containerd[1483]: time="2025-08-12T23:57:26.446395243Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:57:26.447391 containerd[1483]: time="2025-08-12T23:57:26.446458578Z" level=info msg="Connect containerd service" Aug 12 23:57:26.447391 containerd[1483]: time="2025-08-12T23:57:26.446504516Z" level=info msg="using legacy CRI server" Aug 12 23:57:26.447391 containerd[1483]: time="2025-08-12T23:57:26.446514784Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:57:26.447391 containerd[1483]: time="2025-08-12T23:57:26.446702095Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:57:26.452030 containerd[1483]: time="2025-08-12T23:57:26.451984598Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:57:26.452435 
containerd[1483]: time="2025-08-12T23:57:26.452362043Z" level=info msg="Start subscribing containerd event" Aug 12 23:57:26.452667 containerd[1483]: time="2025-08-12T23:57:26.452649581Z" level=info msg="Start recovering state" Aug 12 23:57:26.452836 containerd[1483]: time="2025-08-12T23:57:26.452488183Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:57:26.452972 containerd[1483]: time="2025-08-12T23:57:26.452959019Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:57:26.453339 containerd[1483]: time="2025-08-12T23:57:26.453323356Z" level=info msg="Start event monitor" Aug 12 23:57:26.453659 containerd[1483]: time="2025-08-12T23:57:26.453644343Z" level=info msg="Start snapshots syncer" Aug 12 23:57:26.453739 containerd[1483]: time="2025-08-12T23:57:26.453727839Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:57:26.453825 containerd[1483]: time="2025-08-12T23:57:26.453812759Z" level=info msg="Start streaming server" Aug 12 23:57:26.454088 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:57:26.457710 containerd[1483]: time="2025-08-12T23:57:26.457632045Z" level=info msg="containerd successfully booted in 0.079219s" Aug 12 23:57:26.522737 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:57:26.550460 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:57:26.559061 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:57:26.571272 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:57:26.571517 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:57:26.580160 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:57:26.597567 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:57:26.610211 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:57:26.613355 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 12 23:57:26.618613 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:57:26.649895 systemd-networkd[1371]: eth1: Gained IPv6LL Aug 12 23:57:26.653787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:57:26.656274 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:57:26.661964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:57:26.672049 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:57:26.702727 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:57:26.905879 systemd-networkd[1371]: eth0: Gained IPv6LL Aug 12 23:57:27.783750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:57:27.784781 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:57:27.787548 systemd[1]: Startup finished in 953ms (kernel) + 5.158s (initrd) + 5.964s (userspace) = 12.075s. 
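The CRI plugin's "failed to load cni during init" error above is expected at this point: /etc/cni/net.d is still empty, and the conf syncer started here keeps watching the directory until a network add-on installs a config (Calico does so later in this log). A hedged way to check from a shell; the 10-calico.conflist name is only the usual Calico default, not a value taken from this log:

# Directory containerd's CRI plugin watches (NetworkPluginConfDir above).
ls -l /etc/cni/net.d
# Until a conflist such as 10-calico.conflist appears, pod networking stays
# down with "cni plugin not initialized".
cat /etc/cni/net.d/*.conflist 2>/dev/null || echo "no CNI config yet"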
Aug 12 23:57:27.795220 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:57:28.434271 kubelet[1570]: E0812 23:57:28.434167 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:57:28.437582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:57:28.437811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:57:28.438272 systemd[1]: kubelet.service: Consumed 1.291s CPU time, 266.9M memory peak. Aug 12 23:57:30.299406 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:57:30.308165 systemd[1]: Started sshd@0-24.199.122.14:22-139.178.68.195:52306.service - OpenSSH per-connection server daemon (139.178.68.195:52306). Aug 12 23:57:30.384955 sshd[1582]: Accepted publickey for core from 139.178.68.195 port 52306 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:30.386785 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:30.394952 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:57:30.400136 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:57:30.411548 systemd-logind[1464]: New session 1 of user core. Aug 12 23:57:30.423162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:57:30.438156 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:57:30.443424 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:57:30.447174 systemd-logind[1464]: New session c1 of user core. Aug 12 23:57:30.604241 systemd[1586]: Queued start job for default target default.target. Aug 12 23:57:30.613050 systemd[1586]: Created slice app.slice - User Application Slice. Aug 12 23:57:30.613093 systemd[1586]: Reached target paths.target - Paths. Aug 12 23:57:30.613156 systemd[1586]: Reached target timers.target - Timers. Aug 12 23:57:30.615120 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:57:30.641561 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:57:30.641719 systemd[1586]: Reached target sockets.target - Sockets. Aug 12 23:57:30.641775 systemd[1586]: Reached target basic.target - Basic System. Aug 12 23:57:30.641816 systemd[1586]: Reached target default.target - Main User Target. Aug 12 23:57:30.641852 systemd[1586]: Startup finished in 184ms. Aug 12 23:57:30.642359 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:57:30.647933 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:57:30.721739 systemd[1]: Started sshd@1-24.199.122.14:22-139.178.68.195:52322.service - OpenSSH per-connection server daemon (139.178.68.195:52322). Aug 12 23:57:30.775233 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 52322 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:30.777551 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:30.784516 systemd-logind[1464]: New session 2 of user core. 
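The kubelet failure above is the expected first-boot behaviour: /var/lib/kubelet/config.yaml has not been written yet, so the unit exits and is only restarted once the node is provisioned. A sketch of how that file normally appears on a node like this, assuming kubeadm is the provisioning tool in use (the endpoint, token and hash below are placeholders, not values from this log):

# The kubelet exits immediately while its --config file is missing.
test -f /var/lib/kubelet/config.yaml || echo "kubelet config not written yet"
# kubeadm join writes /var/lib/kubelet/config.yaml and the bootstrap
# kubeconfig as part of joining the node to a cluster.
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash>
systemctl restart kubelet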
Aug 12 23:57:30.799678 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:57:30.865643 sshd[1599]: Connection closed by 139.178.68.195 port 52322 Aug 12 23:57:30.867036 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:30.887095 systemd[1]: sshd@1-24.199.122.14:22-139.178.68.195:52322.service: Deactivated successfully. Aug 12 23:57:30.889290 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:57:30.891901 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:57:30.898155 systemd[1]: Started sshd@2-24.199.122.14:22-139.178.68.195:52330.service - OpenSSH per-connection server daemon (139.178.68.195:52330). Aug 12 23:57:30.899567 systemd-logind[1464]: Removed session 2. Aug 12 23:57:30.942927 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 52330 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:30.945927 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:30.953384 systemd-logind[1464]: New session 3 of user core. Aug 12 23:57:30.961068 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:57:31.016763 sshd[1607]: Connection closed by 139.178.68.195 port 52330 Aug 12 23:57:31.017516 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:31.026825 systemd[1]: sshd@2-24.199.122.14:22-139.178.68.195:52330.service: Deactivated successfully. Aug 12 23:57:31.029014 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:57:31.029835 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:57:31.037053 systemd[1]: Started sshd@3-24.199.122.14:22-139.178.68.195:52340.service - OpenSSH per-connection server daemon (139.178.68.195:52340). Aug 12 23:57:31.039174 systemd-logind[1464]: Removed session 3. Aug 12 23:57:31.079957 sshd[1612]: Accepted publickey for core from 139.178.68.195 port 52340 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:31.081645 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:31.088008 systemd-logind[1464]: New session 4 of user core. Aug 12 23:57:31.101567 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:57:31.162541 sshd[1615]: Connection closed by 139.178.68.195 port 52340 Aug 12 23:57:31.162364 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:31.175928 systemd[1]: sshd@3-24.199.122.14:22-139.178.68.195:52340.service: Deactivated successfully. Aug 12 23:57:31.177973 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:57:31.180010 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:57:31.185090 systemd[1]: Started sshd@4-24.199.122.14:22-139.178.68.195:52354.service - OpenSSH per-connection server daemon (139.178.68.195:52354). Aug 12 23:57:31.186958 systemd-logind[1464]: Removed session 4. Aug 12 23:57:31.234103 sshd[1620]: Accepted publickey for core from 139.178.68.195 port 52354 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:31.236017 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:31.243438 systemd-logind[1464]: New session 5 of user core. Aug 12 23:57:31.250028 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:57:31.320312 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:57:31.320640 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:57:31.343037 sudo[1624]: pam_unix(sudo:session): session closed for user root Aug 12 23:57:31.346905 sshd[1623]: Connection closed by 139.178.68.195 port 52354 Aug 12 23:57:31.347976 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:31.357946 systemd[1]: sshd@4-24.199.122.14:22-139.178.68.195:52354.service: Deactivated successfully. Aug 12 23:57:31.361158 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:57:31.363977 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:57:31.371929 systemd[1]: Started sshd@5-24.199.122.14:22-139.178.68.195:52362.service - OpenSSH per-connection server daemon (139.178.68.195:52362). Aug 12 23:57:31.373931 systemd-logind[1464]: Removed session 5. Aug 12 23:57:31.423027 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 52362 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:31.424745 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:31.430942 systemd-logind[1464]: New session 6 of user core. Aug 12 23:57:31.439038 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 12 23:57:31.502307 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:57:31.502622 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:57:31.506564 sudo[1634]: pam_unix(sudo:session): session closed for user root Aug 12 23:57:31.513616 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:57:31.514401 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:57:31.535148 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:57:31.567866 augenrules[1656]: No rules Aug 12 23:57:31.569242 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:57:31.569531 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:57:31.571212 sudo[1633]: pam_unix(sudo:session): session closed for user root Aug 12 23:57:32.256981 systemd-resolved[1337]: Clock change detected. Flushing caches. Aug 12 23:57:32.257544 systemd-timesyncd[1357]: Contacted time server 192.48.105.15:123 (1.flatcar.pool.ntp.org). Aug 12 23:57:32.257610 systemd-timesyncd[1357]: Initial clock synchronization to Tue 2025-08-12 23:57:32.256908 UTC. Aug 12 23:57:32.259295 sshd[1632]: Connection closed by 139.178.68.195 port 52362 Aug 12 23:57:32.259801 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:32.269316 systemd[1]: sshd@5-24.199.122.14:22-139.178.68.195:52362.service: Deactivated successfully. Aug 12 23:57:32.271484 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:57:32.272317 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:57:32.278558 systemd[1]: Started sshd@6-24.199.122.14:22-139.178.68.195:52364.service - OpenSSH per-connection server daemon (139.178.68.195:52364). Aug 12 23:57:32.281514 systemd-logind[1464]: Removed session 6. 
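The audit-rules restart above ended up loading an empty rule set: the preceding sudo session removed /etc/audit/rules.d/80-selinux.rules and 99-default.rules, and augenrules then reported "No rules". A small sketch for inspecting that state, assuming the standard audit userspace tools are installed:

# Rule fragments augenrules concatenates into the loaded rule set.
ls /etc/audit/rules.d/
# Regenerate and load the combined rules (roughly what the unit above did).
augenrules --load
# List rules currently active in the kernel; prints "No rules" when empty.
auditctl -l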
Aug 12 23:57:32.332978 sshd[1664]: Accepted publickey for core from 139.178.68.195 port 52364 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:57:32.334766 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:32.340358 systemd-logind[1464]: New session 7 of user core. Aug 12 23:57:32.347430 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:57:32.409299 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:57:32.409613 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:57:33.199725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:57:33.199986 systemd[1]: kubelet.service: Consumed 1.291s CPU time, 266.9M memory peak. Aug 12 23:57:33.206555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:57:33.264199 systemd[1]: Reload requested from client PID 1704 ('systemctl') (unit session-7.scope)... Aug 12 23:57:33.264227 systemd[1]: Reloading... Aug 12 23:57:33.443629 zram_generator::config[1743]: No configuration found. Aug 12 23:57:33.606840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:57:33.734538 systemd[1]: Reloading finished in 469 ms. Aug 12 23:57:33.819101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:57:33.820776 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:57:33.821106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:57:33.821190 systemd[1]: kubelet.service: Consumed 156ms CPU time, 97.7M memory peak. Aug 12 23:57:33.826773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:57:34.116406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:57:34.123842 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:57:34.206132 kubelet[1804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:57:34.206132 kubelet[1804]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:57:34.206132 kubelet[1804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
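The three deprecation warnings above are for flags the kubelet now prefers to take from the file passed via --config. A minimal sketch of the corresponding KubeletConfiguration fields, appended through a shell here-document; the field names are the upstream kubelet.config.k8s.io/v1beta1 ones, the socket and plugin paths are the ones already shown elsewhere in this log, and --pod-infra-container-image has no config-file equivalent (hence "will be removed in 1.35" rather than "set via the config file"):

# Only append if these keys are not already present in the config file.
cat >>/var/lib/kubelet/config.yaml <<'EOF'
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
systemctl restart kubelet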
Aug 12 23:57:34.206754 kubelet[1804]: I0812 23:57:34.206189 1804 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:57:35.187016 kubelet[1804]: I0812 23:57:35.186954 1804 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 12 23:57:35.187016 kubelet[1804]: I0812 23:57:35.186990 1804 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:57:35.187328 kubelet[1804]: I0812 23:57:35.187297 1804 server.go:956] "Client rotation is on, will bootstrap in background" Aug 12 23:57:35.209822 kubelet[1804]: I0812 23:57:35.209787 1804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:57:35.227125 kubelet[1804]: E0812 23:57:35.225002 1804 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:57:35.227125 kubelet[1804]: I0812 23:57:35.225101 1804 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:57:35.229603 kubelet[1804]: I0812 23:57:35.229572 1804 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:57:35.230010 kubelet[1804]: I0812 23:57:35.229978 1804 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:57:35.230354 kubelet[1804]: I0812 23:57:35.230133 1804 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"24.199.122.14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:57:35.230581 kubelet[1804]: I0812 23:57:35.230568 1804 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:57:35.230663 kubelet[1804]: I0812 23:57:35.230653 1804 container_manager_linux.go:303] "Creating device plugin manager" Aug 12 23:57:35.230851 
kubelet[1804]: I0812 23:57:35.230839 1804 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:57:35.234024 kubelet[1804]: I0812 23:57:35.233995 1804 kubelet.go:480] "Attempting to sync node with API server" Aug 12 23:57:35.234183 kubelet[1804]: I0812 23:57:35.234171 1804 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:57:35.234808 kubelet[1804]: I0812 23:57:35.234788 1804 kubelet.go:386] "Adding apiserver pod source" Aug 12 23:57:35.234890 kubelet[1804]: I0812 23:57:35.234882 1804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:57:35.236461 kubelet[1804]: E0812 23:57:35.235885 1804 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:35.236461 kubelet[1804]: E0812 23:57:35.235949 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:35.240408 kubelet[1804]: I0812 23:57:35.240381 1804 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:57:35.241277 kubelet[1804]: I0812 23:57:35.241247 1804 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 12 23:57:35.242022 kubelet[1804]: W0812 23:57:35.241999 1804 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:57:35.244013 kubelet[1804]: E0812 23:57:35.243983 1804 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 12 23:57:35.244117 kubelet[1804]: E0812 23:57:35.244095 1804 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"24.199.122.14\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 12 23:57:35.245709 kubelet[1804]: I0812 23:57:35.245689 1804 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:57:35.245888 kubelet[1804]: I0812 23:57:35.245877 1804 server.go:1289] "Started kubelet" Aug 12 23:57:35.248605 kubelet[1804]: I0812 23:57:35.248578 1804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:57:35.253383 kubelet[1804]: I0812 23:57:35.252242 1804 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:57:35.253383 kubelet[1804]: I0812 23:57:35.253174 1804 server.go:317] "Adding debug handlers to kubelet server" Aug 12 23:57:35.257562 kubelet[1804]: I0812 23:57:35.257479 1804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:57:35.257824 kubelet[1804]: I0812 23:57:35.257806 1804 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:57:35.258175 kubelet[1804]: I0812 23:57:35.258071 1804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:57:35.263134 kubelet[1804]: I0812 23:57:35.260639 1804 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Aug 12 23:57:35.263134 kubelet[1804]: I0812 23:57:35.260827 1804 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:57:35.263134 kubelet[1804]: I0812 23:57:35.260959 1804 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:57:35.263134 kubelet[1804]: E0812 23:57:35.262289 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.265554 kubelet[1804]: I0812 23:57:35.265378 1804 factory.go:223] Registration of the systemd container factory successfully Aug 12 23:57:35.265906 kubelet[1804]: I0812 23:57:35.265519 1804 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:57:35.270737 kubelet[1804]: E0812 23:57:35.267192 1804 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{24.199.122.14.185b2a5c317199a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:24.199.122.14,UID:24.199.122.14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:24.199.122.14,},FirstTimestamp:2025-08-12 23:57:35.245826469 +0000 UTC m=+1.112695410,LastTimestamp:2025-08-12 23:57:35.245826469 +0000 UTC m=+1.112695410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:24.199.122.14,}" Aug 12 23:57:35.274383 kubelet[1804]: E0812 23:57:35.274314 1804 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 12 23:57:35.275056 kubelet[1804]: E0812 23:57:35.274681 1804 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:57:35.275465 kubelet[1804]: E0812 23:57:35.275107 1804 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"24.199.122.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 12 23:57:35.275465 kubelet[1804]: I0812 23:57:35.275231 1804 factory.go:223] Registration of the containerd container factory successfully Aug 12 23:57:35.305417 kubelet[1804]: I0812 23:57:35.305392 1804 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:57:35.305971 kubelet[1804]: I0812 23:57:35.305698 1804 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:57:35.305971 kubelet[1804]: I0812 23:57:35.305723 1804 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:57:35.309619 kubelet[1804]: I0812 23:57:35.309332 1804 policy_none.go:49] "None policy: Start" Aug 12 23:57:35.309619 kubelet[1804]: I0812 23:57:35.309362 1804 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:57:35.309619 kubelet[1804]: I0812 23:57:35.309375 1804 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:57:35.316833 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:57:35.328828 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:57:35.335175 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:57:35.344685 kubelet[1804]: E0812 23:57:35.344166 1804 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 12 23:57:35.344685 kubelet[1804]: I0812 23:57:35.344376 1804 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:57:35.344685 kubelet[1804]: I0812 23:57:35.344391 1804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:57:35.344898 kubelet[1804]: I0812 23:57:35.344725 1804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:57:35.351521 kubelet[1804]: E0812 23:57:35.351484 1804 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:57:35.351799 kubelet[1804]: E0812 23:57:35.351761 1804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"24.199.122.14\" not found" Aug 12 23:57:35.364188 kubelet[1804]: I0812 23:57:35.364132 1804 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 12 23:57:35.366055 kubelet[1804]: I0812 23:57:35.366019 1804 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 12 23:57:35.366575 kubelet[1804]: I0812 23:57:35.366222 1804 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 12 23:57:35.366575 kubelet[1804]: I0812 23:57:35.366262 1804 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
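The repeated "User \"system:anonymous\" cannot list/get ..." errors above are the normal TLS-bootstrap window: "Client rotation is on, will bootstrap in background" was logged at startup, and until the kubelet's client certificate is issued its API calls are unauthenticated. They stop around the "Certificate rotation detected" line further down, once the node registers. A sketch of how one might verify this from a machine with cluster credentials (kubectl and openssl assumed available; they are not part of this log):

# Kubelet client certificate signing requests created during bootstrap.
kubectl get csr
# Once the CSR is approved the node object exists and the
# "node not found" errors above stop.
kubectl get node 24.199.122.14 -o wide
# The rotated client certificate the kubelet switches to:
openssl x509 -noout -subject -enddate \
  -in /var/lib/kubelet/pki/kubelet-client-current.pem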
Aug 12 23:57:35.366575 kubelet[1804]: I0812 23:57:35.366274 1804 kubelet.go:2436] "Starting kubelet main sync loop" Aug 12 23:57:35.366575 kubelet[1804]: E0812 23:57:35.366332 1804 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 12 23:57:35.446506 kubelet[1804]: I0812 23:57:35.445566 1804 kubelet_node_status.go:75] "Attempting to register node" node="24.199.122.14" Aug 12 23:57:35.456705 kubelet[1804]: I0812 23:57:35.456661 1804 kubelet_node_status.go:78] "Successfully registered node" node="24.199.122.14" Aug 12 23:57:35.457102 kubelet[1804]: E0812 23:57:35.456984 1804 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"24.199.122.14\": node \"24.199.122.14\" not found" Aug 12 23:57:35.478675 kubelet[1804]: E0812 23:57:35.478558 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.579470 kubelet[1804]: E0812 23:57:35.579388 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.667734 sudo[1668]: pam_unix(sudo:session): session closed for user root Aug 12 23:57:35.672038 sshd[1667]: Connection closed by 139.178.68.195 port 52364 Aug 12 23:57:35.672993 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:35.677558 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:57:35.678821 systemd[1]: sshd@6-24.199.122.14:22-139.178.68.195:52364.service: Deactivated successfully. Aug 12 23:57:35.680364 kubelet[1804]: E0812 23:57:35.680260 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.682052 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:57:35.682653 systemd[1]: session-7.scope: Consumed 638ms CPU time, 72.3M memory peak. Aug 12 23:57:35.684735 systemd-logind[1464]: Removed session 7. 
Aug 12 23:57:35.781176 kubelet[1804]: E0812 23:57:35.781005 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.882069 kubelet[1804]: E0812 23:57:35.882014 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:35.983187 kubelet[1804]: E0812 23:57:35.983127 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.084240 kubelet[1804]: E0812 23:57:36.084101 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.184614 kubelet[1804]: E0812 23:57:36.184548 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.189886 kubelet[1804]: I0812 23:57:36.189827 1804 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 12 23:57:36.190128 kubelet[1804]: I0812 23:57:36.190036 1804 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Aug 12 23:57:36.190128 kubelet[1804]: I0812 23:57:36.190124 1804 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Aug 12 23:57:36.236442 kubelet[1804]: E0812 23:57:36.236383 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:36.284683 kubelet[1804]: E0812 23:57:36.284615 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.385861 kubelet[1804]: E0812 23:57:36.385706 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.487183 kubelet[1804]: E0812 23:57:36.486580 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.587169 kubelet[1804]: E0812 23:57:36.587112 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.688273 kubelet[1804]: E0812 23:57:36.688133 1804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"24.199.122.14\" not found" Aug 12 23:57:36.790106 kubelet[1804]: I0812 23:57:36.790005 1804 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 12 23:57:36.790838 containerd[1483]: time="2025-08-12T23:57:36.790522125Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 12 23:57:36.791460 kubelet[1804]: I0812 23:57:36.790858 1804 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 12 23:57:37.236801 kubelet[1804]: E0812 23:57:37.236738 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:37.237595 kubelet[1804]: I0812 23:57:37.237309 1804 apiserver.go:52] "Watching apiserver" Aug 12 23:57:37.248204 kubelet[1804]: E0812 23:57:37.248152 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:37.255655 systemd[1]: Created slice kubepods-besteffort-pod9bab77cf_e6d7_4a58_a086_07dc69fbfcf6.slice - libcontainer container kubepods-besteffort-pod9bab77cf_e6d7_4a58_a086_07dc69fbfcf6.slice. Aug 12 23:57:37.261268 kubelet[1804]: I0812 23:57:37.261214 1804 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:57:37.270583 systemd[1]: Created slice kubepods-besteffort-podf3768e1f_fb4d_494c_a8d0_fdefa276d248.slice - libcontainer container kubepods-besteffort-podf3768e1f_fb4d_494c_a8d0_fdefa276d248.slice. Aug 12 23:57:37.272156 kubelet[1804]: I0812 23:57:37.271825 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fee2ac30-f11f-43b7-ba5e-ccd47684ad80-varrun\") pod \"csi-node-driver-l5vvd\" (UID: \"fee2ac30-f11f-43b7-ba5e-ccd47684ad80\") " pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:37.272156 kubelet[1804]: I0812 23:57:37.271864 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kds68\" (UniqueName: \"kubernetes.io/projected/fee2ac30-f11f-43b7-ba5e-ccd47684ad80-kube-api-access-kds68\") pod \"csi-node-driver-l5vvd\" (UID: \"fee2ac30-f11f-43b7-ba5e-ccd47684ad80\") " pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:37.272156 kubelet[1804]: I0812 23:57:37.271881 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3768e1f-fb4d-494c-a8d0-fdefa276d248-kube-proxy\") pod \"kube-proxy-flk5k\" (UID: \"f3768e1f-fb4d-494c-a8d0-fdefa276d248\") " pod="kube-system/kube-proxy-flk5k" Aug 12 23:57:37.272156 kubelet[1804]: I0812 23:57:37.271896 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-cni-bin-dir\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272156 kubelet[1804]: I0812 23:57:37.271913 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-flexvol-driver-host\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272776 kubelet[1804]: I0812 23:57:37.271929 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-lib-modules\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272776 kubelet[1804]: I0812 23:57:37.271945 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntpbw\" (UniqueName: \"kubernetes.io/projected/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-kube-api-access-ntpbw\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272776 kubelet[1804]: I0812 23:57:37.271960 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-cni-log-dir\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272776 kubelet[1804]: I0812 23:57:37.271979 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-cni-net-dir\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272776 kubelet[1804]: I0812 23:57:37.271993 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-node-certs\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272902 kubelet[1804]: I0812 23:57:37.272007 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-var-lib-calico\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272902 kubelet[1804]: I0812 23:57:37.272128 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-var-run-calico\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.272902 kubelet[1804]: I0812 23:57:37.272438 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fee2ac30-f11f-43b7-ba5e-ccd47684ad80-registration-dir\") pod \"csi-node-driver-l5vvd\" (UID: \"fee2ac30-f11f-43b7-ba5e-ccd47684ad80\") " pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:37.272902 kubelet[1804]: I0812 23:57:37.272462 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3768e1f-fb4d-494c-a8d0-fdefa276d248-xtables-lock\") pod \"kube-proxy-flk5k\" (UID: \"f3768e1f-fb4d-494c-a8d0-fdefa276d248\") " pod="kube-system/kube-proxy-flk5k" Aug 12 23:57:37.272902 kubelet[1804]: I0812 23:57:37.272478 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqlf\" (UniqueName: 
\"kubernetes.io/projected/f3768e1f-fb4d-494c-a8d0-fdefa276d248-kube-api-access-fmqlf\") pod \"kube-proxy-flk5k\" (UID: \"f3768e1f-fb4d-494c-a8d0-fdefa276d248\") " pod="kube-system/kube-proxy-flk5k" Aug 12 23:57:37.273015 kubelet[1804]: I0812 23:57:37.272507 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-policysync\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.273015 kubelet[1804]: I0812 23:57:37.272521 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-tigera-ca-bundle\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.273015 kubelet[1804]: I0812 23:57:37.272534 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bab77cf-e6d7-4a58-a086-07dc69fbfcf6-xtables-lock\") pod \"calico-node-fqh24\" (UID: \"9bab77cf-e6d7-4a58-a086-07dc69fbfcf6\") " pod="calico-system/calico-node-fqh24" Aug 12 23:57:37.273015 kubelet[1804]: I0812 23:57:37.272573 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3768e1f-fb4d-494c-a8d0-fdefa276d248-lib-modules\") pod \"kube-proxy-flk5k\" (UID: \"f3768e1f-fb4d-494c-a8d0-fdefa276d248\") " pod="kube-system/kube-proxy-flk5k" Aug 12 23:57:37.273015 kubelet[1804]: I0812 23:57:37.272604 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee2ac30-f11f-43b7-ba5e-ccd47684ad80-kubelet-dir\") pod \"csi-node-driver-l5vvd\" (UID: \"fee2ac30-f11f-43b7-ba5e-ccd47684ad80\") " pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:37.273153 kubelet[1804]: I0812 23:57:37.272620 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fee2ac30-f11f-43b7-ba5e-ccd47684ad80-socket-dir\") pod \"csi-node-driver-l5vvd\" (UID: \"fee2ac30-f11f-43b7-ba5e-ccd47684ad80\") " pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:37.382230 kubelet[1804]: E0812 23:57:37.382195 1804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:57:37.382449 kubelet[1804]: W0812 23:57:37.382355 1804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:57:37.382449 kubelet[1804]: E0812 23:57:37.382389 1804 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:57:37.390147 kubelet[1804]: E0812 23:57:37.389680 1804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:57:37.392107 kubelet[1804]: W0812 23:57:37.390330 1804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:57:37.392107 kubelet[1804]: E0812 23:57:37.390369 1804 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:57:37.401139 kubelet[1804]: E0812 23:57:37.400329 1804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:57:37.402160 kubelet[1804]: W0812 23:57:37.402122 1804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:57:37.402288 kubelet[1804]: E0812 23:57:37.402161 1804 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:57:37.403996 kubelet[1804]: E0812 23:57:37.403816 1804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:57:37.404703 kubelet[1804]: W0812 23:57:37.404669 1804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:57:37.404703 kubelet[1804]: E0812 23:57:37.404702 1804 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:57:37.570833 containerd[1483]: time="2025-08-12T23:57:37.570446373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqh24,Uid:9bab77cf-e6d7-4a58-a086-07dc69fbfcf6,Namespace:calico-system,Attempt:0,}" Aug 12 23:57:37.575040 kubelet[1804]: E0812 23:57:37.574758 1804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 12 23:57:37.575861 containerd[1483]: time="2025-08-12T23:57:37.575457368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-flk5k,Uid:f3768e1f-fb4d-494c-a8d0-fdefa276d248,Namespace:kube-system,Attempt:0,}" Aug 12 23:57:38.081420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134186583.mount: Deactivated successfully. 
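The driver-call failures above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory is present, but the uds executable is not yet on disk (it is presumably installed by the flexvol-driver init container that runs a couple of seconds later in this log), so the `init` call produces no output and the JSON unmarshal fails. For illustration only, a hypothetical stand-in driver in Python that emits the kind of `init` response the kubelet can parse; the capability set shown is an assumption, not what Calico's real uds driver reports.

```python
#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver executable, illustration only.
# The kubelet invokes the driver as "<driver> init" while probing the plugin
# directory and parses whatever is printed to stdout as JSON; a missing binary
# (or one that prints nothing) yields the "unexpected end of JSON input"
# errors seen above.
import json
import sys

def main() -> int:
    command = sys.argv[1] if len(sys.argv) > 1 else ""
    if command == "init":
        # Minimal well-formed init response; "attach": False tells the kubelet
        # not to drive attach/detach through this driver.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Anything this stub does not implement is reported as not supported.
    print(json.dumps({"status": "Not supported",
                      "message": f"unhandled command {command!r}"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```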
Aug 12 23:57:38.085124 containerd[1483]: time="2025-08-12T23:57:38.085035961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:57:38.086937 containerd[1483]: time="2025-08-12T23:57:38.086895134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 12 23:57:38.088617 containerd[1483]: time="2025-08-12T23:57:38.088427197Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:57:38.089222 containerd[1483]: time="2025-08-12T23:57:38.089187045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:57:38.090930 containerd[1483]: time="2025-08-12T23:57:38.090853793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:57:38.097838 containerd[1483]: time="2025-08-12T23:57:38.097774182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:57:38.100648 containerd[1483]: time="2025-08-12T23:57:38.099982329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.370961ms" Aug 12 23:57:38.102226 containerd[1483]: time="2025-08-12T23:57:38.102185064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.594018ms" Aug 12 23:57:38.220015 containerd[1483]: time="2025-08-12T23:57:38.219670527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:38.220015 containerd[1483]: time="2025-08-12T23:57:38.219731018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:38.220015 containerd[1483]: time="2025-08-12T23:57:38.219745764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:38.222112 containerd[1483]: time="2025-08-12T23:57:38.221992335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:38.230482 containerd[1483]: time="2025-08-12T23:57:38.230145585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:38.230482 containerd[1483]: time="2025-08-12T23:57:38.230201356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:38.230482 containerd[1483]: time="2025-08-12T23:57:38.230216350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:38.230482 containerd[1483]: time="2025-08-12T23:57:38.230299642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:38.238147 kubelet[1804]: E0812 23:57:38.237773 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:38.278338 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 12 23:57:38.309342 systemd[1]: Started cri-containerd-12a9d9759d36b9db962b2d4eeae0c326f2d66251e400b848d3d5107a29ce94e0.scope - libcontainer container 12a9d9759d36b9db962b2d4eeae0c326f2d66251e400b848d3d5107a29ce94e0. Aug 12 23:57:38.314680 systemd[1]: Started cri-containerd-f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45.scope - libcontainer container f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45. Aug 12 23:57:38.353726 containerd[1483]: time="2025-08-12T23:57:38.353570906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqh24,Uid:9bab77cf-e6d7-4a58-a086-07dc69fbfcf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\"" Aug 12 23:57:38.358678 containerd[1483]: time="2025-08-12T23:57:38.358075113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:57:38.361223 containerd[1483]: time="2025-08-12T23:57:38.360553357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-flk5k,Uid:f3768e1f-fb4d-494c-a8d0-fdefa276d248,Namespace:kube-system,Attempt:0,} returns sandbox id \"12a9d9759d36b9db962b2d4eeae0c326f2d66251e400b848d3d5107a29ce94e0\"" Aug 12 23:57:38.361864 kubelet[1804]: E0812 23:57:38.361413 1804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 12 23:57:39.238757 kubelet[1804]: E0812 23:57:39.238699 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:39.368115 kubelet[1804]: E0812 23:57:39.367437 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:39.464049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235407878.mount: Deactivated successfully. 
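The recurring "Nameserver limits exceeded" warnings are the kubelet trimming the resolver config it hands to pods: the list is capped at three nameservers (the classic glibc limit), and the applied line it prints, "67.207.67.2 67.207.67.3 67.207.67.2", suggests the droplet's resolv.conf carries more entries than fit under that cap, including a duplicate. A small sketch of the same trimming decision; the resolv.conf contents below are hypothetical.

```python
# Minimal sketch of the nameserver trimming that produces the warning above;
# the resolv.conf text is made up, the three-entry cap is the limit that
# triggers the kubelet's "Nameserver limits exceeded" message.
MAX_NAMESERVERS = 3

resolv_conf = """\
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 1.1.1.1
"""

nameservers = [
    line.split()[1]
    for line in resolv_conf.splitlines()
    if line.startswith("nameserver") and len(line.split()) > 1
]

if len(nameservers) > MAX_NAMESERVERS:
    applied = nameservers[:MAX_NAMESERVERS]
    print("Nameserver limits exceeded, applied line:", " ".join(applied))
else:
    print("within limits:", " ".join(nameservers))
```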
Aug 12 23:57:39.545227 containerd[1483]: time="2025-08-12T23:57:39.544948064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:39.546392 containerd[1483]: time="2025-08-12T23:57:39.545902574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 12 23:57:39.547318 containerd[1483]: time="2025-08-12T23:57:39.546917300Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:39.550120 containerd[1483]: time="2025-08-12T23:57:39.549655259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:39.550808 containerd[1483]: time="2025-08-12T23:57:39.550496844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.192369462s" Aug 12 23:57:39.550808 containerd[1483]: time="2025-08-12T23:57:39.550543005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 12 23:57:39.551998 containerd[1483]: time="2025-08-12T23:57:39.551969912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 12 23:57:39.555530 containerd[1483]: time="2025-08-12T23:57:39.555367678Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:57:39.572481 containerd[1483]: time="2025-08-12T23:57:39.572415183Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186\"" Aug 12 23:57:39.574179 containerd[1483]: time="2025-08-12T23:57:39.573631877Z" level=info msg="StartContainer for \"abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186\"" Aug 12 23:57:39.617323 systemd[1]: Started cri-containerd-abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186.scope - libcontainer container abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186. Aug 12 23:57:39.655621 containerd[1483]: time="2025-08-12T23:57:39.655428640Z" level=info msg="StartContainer for \"abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186\" returns successfully" Aug 12 23:57:39.667048 systemd[1]: cri-containerd-abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186.scope: Deactivated successfully. 
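The containerd pull records carry both a byte count and the elapsed time, so effective pull throughput can be read straight off this log. A quick sketch using the figures reported above for pause:3.8 and pod2daemon-flexvol (sizes as containerd reports them); the kube-proxy and calico/cni pulls a few entries later can be added the same way.

```python
# Effective pull throughput from the size/duration pairs in the
# "Pulled image" entries above.
pulls = [
    ("registry.k8s.io/pause:3.8",                         311_286,   0.529370961),
    ("registry.k8s.io/pause:3.8 (second sandbox)",        311_286,   0.526594018),
    ("ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2", 5_939_619, 1.192369462),
]

for image, size_bytes, seconds in pulls:
    rate = size_bytes / seconds / 1_000_000  # MB/s, decimal megabytes
    print(f"{image}: {size_bytes} bytes in {seconds:.3f}s -> {rate:.2f} MB/s")
```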
Aug 12 23:57:39.707040 containerd[1483]: time="2025-08-12T23:57:39.706979252Z" level=info msg="shim disconnected" id=abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186 namespace=k8s.io Aug 12 23:57:39.707515 containerd[1483]: time="2025-08-12T23:57:39.707296027Z" level=warning msg="cleaning up after shim disconnected" id=abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186 namespace=k8s.io Aug 12 23:57:39.707515 containerd[1483]: time="2025-08-12T23:57:39.707312810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:57:40.239589 kubelet[1804]: E0812 23:57:40.239525 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:40.430721 systemd[1]: run-containerd-runc-k8s.io-abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186-runc.durV9z.mount: Deactivated successfully. Aug 12 23:57:40.431318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abd86cb8f191d7f33dcd51c843f3601e969379714e128e4f409e99e017d63186-rootfs.mount: Deactivated successfully. Aug 12 23:57:40.739508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282550250.mount: Deactivated successfully. Aug 12 23:57:41.240182 kubelet[1804]: E0812 23:57:41.240127 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:41.350412 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 12 23:57:41.367881 kubelet[1804]: E0812 23:57:41.366924 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:41.406523 containerd[1483]: time="2025-08-12T23:57:41.406448959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:41.407614 containerd[1483]: time="2025-08-12T23:57:41.407570488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 12 23:57:41.410120 containerd[1483]: time="2025-08-12T23:57:41.409367756Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:41.412225 containerd[1483]: time="2025-08-12T23:57:41.412135411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:41.413911 containerd[1483]: time="2025-08-12T23:57:41.413167547Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.861168668s" Aug 12 23:57:41.413911 containerd[1483]: time="2025-08-12T23:57:41.413199438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 12 
23:57:41.414279 containerd[1483]: time="2025-08-12T23:57:41.414245434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:57:41.417965 containerd[1483]: time="2025-08-12T23:57:41.417917153Z" level=info msg="CreateContainer within sandbox \"12a9d9759d36b9db962b2d4eeae0c326f2d66251e400b848d3d5107a29ce94e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:57:41.438646 containerd[1483]: time="2025-08-12T23:57:41.438482507Z" level=info msg="CreateContainer within sandbox \"12a9d9759d36b9db962b2d4eeae0c326f2d66251e400b848d3d5107a29ce94e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd40e6dcc66975439d6797f9a660c1f16ac9e669ff873128d61ef481632369e6\"" Aug 12 23:57:41.441013 containerd[1483]: time="2025-08-12T23:57:41.439165459Z" level=info msg="StartContainer for \"fd40e6dcc66975439d6797f9a660c1f16ac9e669ff873128d61ef481632369e6\"" Aug 12 23:57:41.490278 systemd[1]: Started cri-containerd-fd40e6dcc66975439d6797f9a660c1f16ac9e669ff873128d61ef481632369e6.scope - libcontainer container fd40e6dcc66975439d6797f9a660c1f16ac9e669ff873128d61ef481632369e6. Aug 12 23:57:41.538777 containerd[1483]: time="2025-08-12T23:57:41.538693930Z" level=info msg="StartContainer for \"fd40e6dcc66975439d6797f9a660c1f16ac9e669ff873128d61ef481632369e6\" returns successfully" Aug 12 23:57:42.240579 kubelet[1804]: E0812 23:57:42.240522 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:42.401634 kubelet[1804]: E0812 23:57:42.401258 1804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 12 23:57:42.430191 kubelet[1804]: I0812 23:57:42.430120 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-flk5k" podStartSLOduration=4.378631495 podStartE2EDuration="7.430100356s" podCreationTimestamp="2025-08-12 23:57:35 +0000 UTC" firstStartedPulling="2025-08-12 23:57:38.362537112 +0000 UTC m=+4.229405803" lastFinishedPulling="2025-08-12 23:57:41.414005974 +0000 UTC m=+7.280874664" observedRunningTime="2025-08-12 23:57:42.425416212 +0000 UTC m=+8.292284923" watchObservedRunningTime="2025-08-12 23:57:42.430100356 +0000 UTC m=+8.296969060" Aug 12 23:57:43.241606 kubelet[1804]: E0812 23:57:43.241561 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:43.368114 kubelet[1804]: E0812 23:57:43.367204 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:43.404310 kubelet[1804]: E0812 23:57:43.404272 1804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 12 23:57:44.139655 containerd[1483]: time="2025-08-12T23:57:44.139595205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:44.140587 containerd[1483]: time="2025-08-12T23:57:44.140530389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: 
active requests=0, bytes read=70436221" Aug 12 23:57:44.141111 containerd[1483]: time="2025-08-12T23:57:44.141068522Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:44.143739 containerd[1483]: time="2025-08-12T23:57:44.143682288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:44.144508 containerd[1483]: time="2025-08-12T23:57:44.144476317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.730200612s" Aug 12 23:57:44.144724 containerd[1483]: time="2025-08-12T23:57:44.144611448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 12 23:57:44.148908 containerd[1483]: time="2025-08-12T23:57:44.148871970Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:57:44.168994 containerd[1483]: time="2025-08-12T23:57:44.168936579Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41\"" Aug 12 23:57:44.169815 containerd[1483]: time="2025-08-12T23:57:44.169577333Z" level=info msg="StartContainer for \"b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41\"" Aug 12 23:57:44.210584 systemd[1]: Started cri-containerd-b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41.scope - libcontainer container b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41. Aug 12 23:57:44.242448 kubelet[1804]: E0812 23:57:44.242368 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:44.249777 containerd[1483]: time="2025-08-12T23:57:44.249697593Z" level=info msg="StartContainer for \"b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41\" returns successfully" Aug 12 23:57:44.915291 containerd[1483]: time="2025-08-12T23:57:44.915241597Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:57:44.919640 systemd[1]: cri-containerd-b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41.scope: Deactivated successfully. Aug 12 23:57:44.919953 systemd[1]: cri-containerd-b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41.scope: Consumed 764ms CPU time, 191.8M memory peak, 171.2M written to disk. Aug 12 23:57:44.942971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41-rootfs.mount: Deactivated successfully. 
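The pod_startup_latency_tracker entry for kube-proxy-flk5k a few entries back is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window it reports. Checking the arithmetic with the logged timestamps, expressed here as seconds after 23:57:00 UTC for brevity:

```python
# Figures copied from the pod_startup_latency_tracker entry for kube-proxy-flk5k.
pod_created        = 35.000000000   # podCreationTimestamp 23:57:35
first_started_pull = 38.362537112   # firstStartedPulling
last_finished_pull = 41.414005974   # lastFinishedPulling
observed_running   = 42.430100356   # observedRunningTime

e2e = observed_running - pod_created
pull_window = last_finished_pull - first_started_pull
slo = e2e - pull_window

print(f"podStartE2EDuration ~ {e2e:.9f}s")          # 7.430100356s, as logged
print(f"image pull window   ~ {pull_window:.9f}s")
print(f"podStartSLOduration ~ {slo:.9f}s")          # ~4.378631494s vs 4.378631495 logged
```

The one-nanosecond difference from the logged 4.378631495 disappears if the monotonic (m=+...) readings are used instead of the wall-clock ones, which is presumably what the tracker does.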
Aug 12 23:57:44.984466 containerd[1483]: time="2025-08-12T23:57:44.984383447Z" level=info msg="shim disconnected" id=b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41 namespace=k8s.io Aug 12 23:57:44.984466 containerd[1483]: time="2025-08-12T23:57:44.984453029Z" level=warning msg="cleaning up after shim disconnected" id=b1d86a4a12687a681540a64504ca64d4fd9b2cb766d08f93b3d7e267c3c4ec41 namespace=k8s.io Aug 12 23:57:44.984466 containerd[1483]: time="2025-08-12T23:57:44.984465560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:57:45.009372 kubelet[1804]: I0812 23:57:45.008497 1804 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:57:45.243027 kubelet[1804]: E0812 23:57:45.242880 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:45.375268 systemd[1]: Created slice kubepods-besteffort-podfee2ac30_f11f_43b7_ba5e_ccd47684ad80.slice - libcontainer container kubepods-besteffort-podfee2ac30_f11f_43b7_ba5e_ccd47684ad80.slice. Aug 12 23:57:45.379212 containerd[1483]: time="2025-08-12T23:57:45.378742627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:0,}" Aug 12 23:57:45.420146 containerd[1483]: time="2025-08-12T23:57:45.419589633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:57:45.421429 systemd-resolved[1337]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Aug 12 23:57:45.463603 containerd[1483]: time="2025-08-12T23:57:45.463537600Z" level=error msg="Failed to destroy network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:45.465128 containerd[1483]: time="2025-08-12T23:57:45.463974363Z" level=error msg="encountered an error cleaning up failed sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:45.465128 containerd[1483]: time="2025-08-12T23:57:45.464051548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:45.466023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188-shm.mount: Deactivated successfully. 
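From here on, every sandbox attempt dies at the same check inside the Calico CNI plugin: a stat of /var/lib/calico/nodename, a file that only exists once the calico/node container (whose image pull has just started above) is running with /var/lib/calico mounted from the host. A minimal sketch of that readiness gate, using only the path named in the error text; the function name is mine.

```python
import os
import sys

# The Calico CNI plugin refuses to set up pod networking until calico/node has
# written the node name to this host path; the same stat failure is what the
# "failed (add)" / "failed (delete)" errors above report.
NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready(path: str = NODENAME_FILE) -> bool:
    try:
        os.stat(path)
    except FileNotFoundError:
        return False
    return True

if __name__ == "__main__":
    if calico_node_ready():
        with open(NODENAME_FILE) as fh:
            print("calico/node is up, node name:", fh.read().strip())
    else:
        print(f"stat {NODENAME_FILE}: no such file or directory "
              "- check that the calico/node container is running", file=sys.stderr)
        sys.exit(1)
```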
Aug 12 23:57:45.466269 kubelet[1804]: E0812 23:57:45.466236 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:45.466348 kubelet[1804]: E0812 23:57:45.466298 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:45.466348 kubelet[1804]: E0812 23:57:45.466320 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:45.467674 kubelet[1804]: E0812 23:57:45.466376 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:46.244589 kubelet[1804]: E0812 23:57:46.244524 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:46.418853 kubelet[1804]: I0812 23:57:46.418806 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188" Aug 12 23:57:46.419844 containerd[1483]: time="2025-08-12T23:57:46.419807391Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:46.420275 containerd[1483]: time="2025-08-12T23:57:46.420213872Z" level=info msg="Ensure that sandbox 503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188 in task-service has been cleanup successfully" Aug 12 23:57:46.423762 containerd[1483]: time="2025-08-12T23:57:46.423010686Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:46.423762 containerd[1483]: time="2025-08-12T23:57:46.423045317Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:46.424114 containerd[1483]: time="2025-08-12T23:57:46.423923713Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:1,}" Aug 12 23:57:46.425035 systemd[1]: run-netns-cni\x2db75d60d8\x2dd034\x2d78b7\x2d0c58\x2df09b85e4604e.mount: Deactivated successfully. Aug 12 23:57:46.522890 containerd[1483]: time="2025-08-12T23:57:46.522710711Z" level=error msg="Failed to destroy network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:46.528881 containerd[1483]: time="2025-08-12T23:57:46.525449229Z" level=error msg="encountered an error cleaning up failed sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:46.528881 containerd[1483]: time="2025-08-12T23:57:46.525564542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:46.529043 kubelet[1804]: E0812 23:57:46.528002 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:46.529043 kubelet[1804]: E0812 23:57:46.528060 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:46.529043 kubelet[1804]: E0812 23:57:46.528102 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:46.527151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613-shm.mount: Deactivated successfully. 
Aug 12 23:57:46.529228 kubelet[1804]: E0812 23:57:46.528172 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:47.245596 kubelet[1804]: E0812 23:57:47.245538 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:47.423657 kubelet[1804]: I0812 23:57:47.423114 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613" Aug 12 23:57:47.424339 containerd[1483]: time="2025-08-12T23:57:47.424293788Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" Aug 12 23:57:47.424691 containerd[1483]: time="2025-08-12T23:57:47.424561407Z" level=info msg="Ensure that sandbox fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613 in task-service has been cleanup successfully" Aug 12 23:57:47.426553 systemd[1]: run-netns-cni\x2dba94cb6b\x2d8d69\x2d10bd\x2de867\x2df1a7cfdec103.mount: Deactivated successfully. Aug 12 23:57:47.427870 containerd[1483]: time="2025-08-12T23:57:47.427345519Z" level=info msg="TearDown network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" successfully" Aug 12 23:57:47.427870 containerd[1483]: time="2025-08-12T23:57:47.427370282Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" returns successfully" Aug 12 23:57:47.427870 containerd[1483]: time="2025-08-12T23:57:47.427791593Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:47.428022 containerd[1483]: time="2025-08-12T23:57:47.427956873Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:47.428022 containerd[1483]: time="2025-08-12T23:57:47.427978812Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:47.429247 containerd[1483]: time="2025-08-12T23:57:47.429152869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:2,}" Aug 12 23:57:47.532050 containerd[1483]: time="2025-08-12T23:57:47.531919732Z" level=error msg="Failed to destroy network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:47.534432 containerd[1483]: time="2025-08-12T23:57:47.534382675Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:47.534525 containerd[1483]: time="2025-08-12T23:57:47.534462912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:47.535110 kubelet[1804]: E0812 23:57:47.534731 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:47.535110 kubelet[1804]: E0812 23:57:47.534797 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:47.535110 kubelet[1804]: E0812 23:57:47.534857 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:47.535280 kubelet[1804]: E0812 23:57:47.534921 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:47.535886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb-shm.mount: Deactivated successfully. Aug 12 23:57:47.574614 systemd[1]: Created slice kubepods-besteffort-podbd4c9686_c83c_417c_992e_74d7223d78d5.slice - libcontainer container kubepods-besteffort-podbd4c9686_c83c_417c_992e_74d7223d78d5.slice. 
Aug 12 23:57:47.642665 kubelet[1804]: I0812 23:57:47.642450 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd4c9686-c83c-417c-992e-74d7223d78d5-calico-apiserver-certs\") pod \"calico-apiserver-754ccc7fc7-4tmcd\" (UID: \"bd4c9686-c83c-417c-992e-74d7223d78d5\") " pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:47.642665 kubelet[1804]: I0812 23:57:47.642495 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpw2b\" (UniqueName: \"kubernetes.io/projected/bd4c9686-c83c-417c-992e-74d7223d78d5-kube-api-access-vpw2b\") pod \"calico-apiserver-754ccc7fc7-4tmcd\" (UID: \"bd4c9686-c83c-417c-992e-74d7223d78d5\") " pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:47.878832 containerd[1483]: time="2025-08-12T23:57:47.878383205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:57:48.008444 containerd[1483]: time="2025-08-12T23:57:48.008363458Z" level=error msg="Failed to destroy network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.009170 containerd[1483]: time="2025-08-12T23:57:48.009048602Z" level=error msg="encountered an error cleaning up failed sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.009350 containerd[1483]: time="2025-08-12T23:57:48.009155008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.010127 kubelet[1804]: E0812 23:57:48.009672 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.010127 kubelet[1804]: E0812 23:57:48.009743 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:48.010127 kubelet[1804]: E0812 23:57:48.009769 1804 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:48.010311 kubelet[1804]: E0812 23:57:48.009833 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" podUID="bd4c9686-c83c-417c-992e-74d7223d78d5" Aug 12 23:57:48.246459 kubelet[1804]: E0812 23:57:48.245961 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:48.434490 kubelet[1804]: I0812 23:57:48.434355 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb" Aug 12 23:57:48.435211 kubelet[1804]: I0812 23:57:48.435185 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360" Aug 12 23:57:48.435854 containerd[1483]: time="2025-08-12T23:57:48.435823490Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\"" Aug 12 23:57:48.436192 containerd[1483]: time="2025-08-12T23:57:48.436063780Z" level=info msg="Ensure that sandbox 50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360 in task-service has been cleanup successfully" Aug 12 23:57:48.438880 containerd[1483]: time="2025-08-12T23:57:48.438314992Z" level=info msg="TearDown network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" successfully" Aug 12 23:57:48.438880 containerd[1483]: time="2025-08-12T23:57:48.438347262Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" returns successfully" Aug 12 23:57:48.438880 containerd[1483]: time="2025-08-12T23:57:48.438478816Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\"" Aug 12 23:57:48.438880 containerd[1483]: time="2025-08-12T23:57:48.438746115Z" level=info msg="Ensure that sandbox b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb in task-service has been cleanup successfully" Aug 12 23:57:48.439960 containerd[1483]: time="2025-08-12T23:57:48.439101507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:1,}" Aug 12 23:57:48.439493 systemd[1]: run-netns-cni\x2ddfbd6054\x2deb2a\x2d5894\x2db6b4\x2d7594784889b7.mount: Deactivated successfully. 
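Each failed sandbox is torn down (StopPodSandbox / TearDown) and recreated with the Attempt counter incremented, so the ladder of attempts per pod can be tallied straight from the RunPodSandbox entries. A sketch over fragments copied from this log; a real version would read the journal rather than a hard-coded list.

```python
import re
from collections import defaultdict

# Fragments copied from the RunPodSandbox entries above.
entries = [
    "RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:0,}",
    "RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:1,}",
    "RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:2,}",
    "RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:0,}",
    "RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:1,}",
]

meta = re.compile(r"Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),Attempt:(\d+)")

attempts = defaultdict(list)
for entry in entries:
    name, uid, namespace, attempt = meta.search(entry).groups()
    attempts[f"{namespace}/{name}"].append(int(attempt))

for pod, seen in attempts.items():
    print(f"{pod}: attempts seen {seen}, highest {max(seen)}")
```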
Aug 12 23:57:48.443121 containerd[1483]: time="2025-08-12T23:57:48.443092052Z" level=info msg="TearDown network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" successfully" Aug 12 23:57:48.443231 containerd[1483]: time="2025-08-12T23:57:48.443216349Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" returns successfully" Aug 12 23:57:48.443985 systemd[1]: run-netns-cni\x2d21a4f74f\x2d960e\x2d8a58\x2d0a9c\x2d6ae6ca2fbb68.mount: Deactivated successfully. Aug 12 23:57:48.447070 containerd[1483]: time="2025-08-12T23:57:48.446832547Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" Aug 12 23:57:48.447070 containerd[1483]: time="2025-08-12T23:57:48.446931156Z" level=info msg="TearDown network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" successfully" Aug 12 23:57:48.447070 containerd[1483]: time="2025-08-12T23:57:48.446941788Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" returns successfully" Aug 12 23:57:48.448103 containerd[1483]: time="2025-08-12T23:57:48.447884999Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:48.448103 containerd[1483]: time="2025-08-12T23:57:48.448043983Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:48.448103 containerd[1483]: time="2025-08-12T23:57:48.448056143Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:48.448809 containerd[1483]: time="2025-08-12T23:57:48.448686743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:3,}" Aug 12 23:57:48.569557 containerd[1483]: time="2025-08-12T23:57:48.567990112Z" level=error msg="Failed to destroy network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.569557 containerd[1483]: time="2025-08-12T23:57:48.568792702Z" level=error msg="encountered an error cleaning up failed sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.569557 containerd[1483]: time="2025-08-12T23:57:48.568863923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.569767 kubelet[1804]: E0812 23:57:48.569198 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.569767 kubelet[1804]: E0812 23:57:48.569257 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:48.569767 kubelet[1804]: E0812 23:57:48.569284 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:48.569865 kubelet[1804]: E0812 23:57:48.569340 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" podUID="bd4c9686-c83c-417c-992e-74d7223d78d5" Aug 12 23:57:48.589806 containerd[1483]: time="2025-08-12T23:57:48.589757962Z" level=error msg="Failed to destroy network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.591434 containerd[1483]: time="2025-08-12T23:57:48.591385763Z" level=error msg="encountered an error cleaning up failed sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.591675 containerd[1483]: time="2025-08-12T23:57:48.591654517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.592068 kubelet[1804]: E0812 23:57:48.592003 1804 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:48.592207 kubelet[1804]: E0812 23:57:48.592074 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:48.592207 kubelet[1804]: E0812 23:57:48.592117 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:48.592207 kubelet[1804]: E0812 23:57:48.592170 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:49.246854 kubelet[1804]: E0812 23:57:49.246759 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:49.427502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740-shm.mount: Deactivated successfully. Aug 12 23:57:49.438971 kubelet[1804]: I0812 23:57:49.438742 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4" Aug 12 23:57:49.439713 containerd[1483]: time="2025-08-12T23:57:49.439441491Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\"" Aug 12 23:57:49.440047 containerd[1483]: time="2025-08-12T23:57:49.439715375Z" level=info msg="Ensure that sandbox 940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4 in task-service has been cleanup successfully" Aug 12 23:57:49.441686 systemd[1]: run-netns-cni\x2d0ab4f5bc\x2d5773\x2d7d25\x2d5de6\x2d3dc9b7c1c3d6.mount: Deactivated successfully. 
Aug 12 23:57:49.443309 containerd[1483]: time="2025-08-12T23:57:49.443159216Z" level=info msg="TearDown network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" successfully" Aug 12 23:57:49.443309 containerd[1483]: time="2025-08-12T23:57:49.443219538Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" returns successfully" Aug 12 23:57:49.445463 containerd[1483]: time="2025-08-12T23:57:49.445399840Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\"" Aug 12 23:57:49.445801 containerd[1483]: time="2025-08-12T23:57:49.445605688Z" level=info msg="TearDown network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" successfully" Aug 12 23:57:49.445801 containerd[1483]: time="2025-08-12T23:57:49.445794875Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" returns successfully" Aug 12 23:57:49.447060 containerd[1483]: time="2025-08-12T23:57:49.446327910Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" Aug 12 23:57:49.447060 containerd[1483]: time="2025-08-12T23:57:49.446664658Z" level=info msg="TearDown network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" successfully" Aug 12 23:57:49.447060 containerd[1483]: time="2025-08-12T23:57:49.446680656Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" returns successfully" Aug 12 23:57:49.447252 kubelet[1804]: I0812 23:57:49.446691 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740" Aug 12 23:57:49.447870 containerd[1483]: time="2025-08-12T23:57:49.447849681Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:49.447939 containerd[1483]: time="2025-08-12T23:57:49.447930173Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:49.447965 containerd[1483]: time="2025-08-12T23:57:49.447940173Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:49.447999 containerd[1483]: time="2025-08-12T23:57:49.447985703Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\"" Aug 12 23:57:49.448714 containerd[1483]: time="2025-08-12T23:57:49.448689518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:4,}" Aug 12 23:57:49.448966 containerd[1483]: time="2025-08-12T23:57:49.448861831Z" level=info msg="Ensure that sandbox 29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740 in task-service has been cleanup successfully" Aug 12 23:57:49.450144 containerd[1483]: time="2025-08-12T23:57:49.450121277Z" level=info msg="TearDown network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" successfully" Aug 12 23:57:49.450144 containerd[1483]: time="2025-08-12T23:57:49.450140127Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" returns successfully" Aug 12 23:57:49.452098 containerd[1483]: time="2025-08-12T23:57:49.451316259Z" 
level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\"" Aug 12 23:57:49.453275 containerd[1483]: time="2025-08-12T23:57:49.453223991Z" level=info msg="TearDown network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" successfully" Aug 12 23:57:49.453275 containerd[1483]: time="2025-08-12T23:57:49.453243548Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" returns successfully" Aug 12 23:57:49.453451 systemd[1]: run-netns-cni\x2d4199ae38\x2d5462\x2df49b\x2d070e\x2d5d9e711cc574.mount: Deactivated successfully. Aug 12 23:57:49.455410 containerd[1483]: time="2025-08-12T23:57:49.455284229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:2,}" Aug 12 23:57:49.581693 containerd[1483]: time="2025-08-12T23:57:49.581564206Z" level=error msg="Failed to destroy network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.584646 containerd[1483]: time="2025-08-12T23:57:49.584445554Z" level=error msg="encountered an error cleaning up failed sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.584951 containerd[1483]: time="2025-08-12T23:57:49.584530448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.585915 kubelet[1804]: E0812 23:57:49.585304 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.585915 kubelet[1804]: E0812 23:57:49.585364 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:49.585915 kubelet[1804]: E0812 23:57:49.585385 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:49.586105 kubelet[1804]: E0812 23:57:49.585437 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:49.603803 containerd[1483]: time="2025-08-12T23:57:49.603236970Z" level=error msg="Failed to destroy network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.604773 containerd[1483]: time="2025-08-12T23:57:49.604439637Z" level=error msg="encountered an error cleaning up failed sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.604773 containerd[1483]: time="2025-08-12T23:57:49.604509244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.605497 kubelet[1804]: E0812 23:57:49.605105 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:49.605497 kubelet[1804]: E0812 23:57:49.605173 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:49.605497 kubelet[1804]: E0812 23:57:49.605200 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:49.605651 kubelet[1804]: E0812 23:57:49.605255 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" podUID="bd4c9686-c83c-417c-992e-74d7223d78d5" Aug 12 23:57:49.905422 systemd[1]: Created slice kubepods-besteffort-pod94db9bfe_6519_4933_ac34_b88541ae35ec.slice - libcontainer container kubepods-besteffort-pod94db9bfe_6519_4933_ac34_b88541ae35ec.slice. Aug 12 23:57:49.954546 kubelet[1804]: I0812 23:57:49.954178 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vdx\" (UniqueName: \"kubernetes.io/projected/94db9bfe-6519-4933-ac34-b88541ae35ec-kube-api-access-z9vdx\") pod \"nginx-deployment-7fcdb87857-ss27p\" (UID: \"94db9bfe-6519-4933-ac34-b88541ae35ec\") " pod="default/nginx-deployment-7fcdb87857-ss27p" Aug 12 23:57:50.210496 containerd[1483]: time="2025-08-12T23:57:50.210045433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:0,}" Aug 12 23:57:50.247755 kubelet[1804]: E0812 23:57:50.247399 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:50.326258 containerd[1483]: time="2025-08-12T23:57:50.325836393Z" level=error msg="Failed to destroy network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.326826 containerd[1483]: time="2025-08-12T23:57:50.326675634Z" level=error msg="encountered an error cleaning up failed sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.326826 containerd[1483]: time="2025-08-12T23:57:50.326778455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.327486 kubelet[1804]: 
E0812 23:57:50.327360 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.327486 kubelet[1804]: E0812 23:57:50.327452 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ss27p" Aug 12 23:57:50.328150 kubelet[1804]: E0812 23:57:50.327710 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ss27p" Aug 12 23:57:50.328150 kubelet[1804]: E0812 23:57:50.327830 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-ss27p_default(94db9bfe-6519-4933-ac34-b88541ae35ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-ss27p_default(94db9bfe-6519-4933-ac34-b88541ae35ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-ss27p" podUID="94db9bfe-6519-4933-ac34-b88541ae35ec" Aug 12 23:57:50.430148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd-shm.mount: Deactivated successfully. Aug 12 23:57:50.430572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc-shm.mount: Deactivated successfully. 
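The systemd lines interleaved with these failures (the run-containerd-...-sandboxes-...-shm.mount and run-netns-cni\x2d....mount units being deactivated) are the per-sandbox shm and network-namespace mounts getting torn down along with each failed sandbox. In those unit names, \x2d is systemd's escape for a literal '-'; a small sketch that undoes the escaping to recover the readable name, assuming only the documented \xXX byte-escape rule for unit names (systemd-escape --unescape does the same from a shell):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit undoes systemd's \xXX escaping as it appears in unit names such
    // as run-netns-cni\x2d21a4f74f\x2d960e\x2d8a58\x2d0a9c\x2d6ae6ca2fbb68.mount.
    func unescapeUnit(name string) string {
        var b strings.Builder
        for i := 0; i < len(name); {
            if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 4
                    continue
                }
            }
            b.WriteByte(name[i])
            i++
        }
        return b.String()
    }

    func main() {
        unit := `run-netns-cni\x2d21a4f74f\x2d960e\x2d8a58\x2d0a9c\x2d6ae6ca2fbb68.mount`
        // Prints run-netns-cni-21a4f74f-960e-8a58-0a9c-6ae6ca2fbb68.mount, i.e. the
        // mount unit guarding /run/netns/cni-21a4f74f-960e-8a58-0a9c-6ae6ca2fbb68.
        fmt.Println(unescapeUnit(unit))
    }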
Aug 12 23:57:50.452647 kubelet[1804]: I0812 23:57:50.452355 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc" Aug 12 23:57:50.454112 containerd[1483]: time="2025-08-12T23:57:50.453728724Z" level=info msg="StopPodSandbox for \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\"" Aug 12 23:57:50.454112 containerd[1483]: time="2025-08-12T23:57:50.454008917Z" level=info msg="Ensure that sandbox ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc in task-service has been cleanup successfully" Aug 12 23:57:50.456930 containerd[1483]: time="2025-08-12T23:57:50.455173246Z" level=info msg="TearDown network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\" successfully" Aug 12 23:57:50.456930 containerd[1483]: time="2025-08-12T23:57:50.455202217Z" level=info msg="StopPodSandbox for \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\" returns successfully" Aug 12 23:57:50.457454 containerd[1483]: time="2025-08-12T23:57:50.457422184Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\"" Aug 12 23:57:50.459094 containerd[1483]: time="2025-08-12T23:57:50.457989557Z" level=info msg="TearDown network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" successfully" Aug 12 23:57:50.460133 containerd[1483]: time="2025-08-12T23:57:50.459288261Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" returns successfully" Aug 12 23:57:50.460133 containerd[1483]: time="2025-08-12T23:57:50.459749344Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\"" Aug 12 23:57:50.459866 systemd[1]: run-netns-cni\x2dce505554\x2ddf37\x2d99c8\x2d2d8c\x2dddda96f5f71c.mount: Deactivated successfully. 
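By this point the add-then-cleanup cycle has left several dead sandbox IDs behind (29f0fbde..., 940d6210..., ba47dfa9..., 1f5e1228..., f2ed6973...). When triaging a capture like this, counting how often each sandbox shows up in the destroy-network errors makes the loop obvious; a throwaway sketch, assuming the journal text is piped in on stdin, with the regexp mirroring the exact message format used in the lines above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the containerd error lines seen in this capture, e.g.
    //   msg="Failed to destroy network for sandbox \"29f0fbde...\""
    var sandboxErr = regexp.MustCompile(`Failed to destroy network for sandbox \\"([0-9a-f]{64})\\"`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines here are very long
        for sc.Scan() {
            if m := sandboxErr.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for id, n := range counts {
            fmt.Printf("%s seen %d time(s)\n", id[:12], n)
        }
    }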
Aug 12 23:57:50.460652 containerd[1483]: time="2025-08-12T23:57:50.460414050Z" level=info msg="TearDown network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" successfully" Aug 12 23:57:50.460652 containerd[1483]: time="2025-08-12T23:57:50.460475016Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" returns successfully" Aug 12 23:57:50.463477 kubelet[1804]: I0812 23:57:50.462215 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd" Aug 12 23:57:50.463617 containerd[1483]: time="2025-08-12T23:57:50.463108796Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" Aug 12 23:57:50.463617 containerd[1483]: time="2025-08-12T23:57:50.463227572Z" level=info msg="TearDown network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" successfully" Aug 12 23:57:50.463617 containerd[1483]: time="2025-08-12T23:57:50.463241848Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" returns successfully" Aug 12 23:57:50.464309 containerd[1483]: time="2025-08-12T23:57:50.464281937Z" level=info msg="StopPodSandbox for \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\"" Aug 12 23:57:50.465392 containerd[1483]: time="2025-08-12T23:57:50.465220060Z" level=info msg="Ensure that sandbox 1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd in task-service has been cleanup successfully" Aug 12 23:57:50.465556 containerd[1483]: time="2025-08-12T23:57:50.465535790Z" level=info msg="TearDown network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\" successfully" Aug 12 23:57:50.465643 containerd[1483]: time="2025-08-12T23:57:50.465628696Z" level=info msg="StopPodSandbox for \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\" returns successfully" Aug 12 23:57:50.468101 containerd[1483]: time="2025-08-12T23:57:50.465769422Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:50.468101 containerd[1483]: time="2025-08-12T23:57:50.465866441Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:50.468101 containerd[1483]: time="2025-08-12T23:57:50.465879186Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:50.469548 containerd[1483]: time="2025-08-12T23:57:50.468824411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:5,}" Aug 12 23:57:50.469548 containerd[1483]: time="2025-08-12T23:57:50.469188078Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\"" Aug 12 23:57:50.469548 containerd[1483]: time="2025-08-12T23:57:50.469270642Z" level=info msg="TearDown network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" successfully" Aug 12 23:57:50.469548 containerd[1483]: time="2025-08-12T23:57:50.469280145Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" returns successfully" Aug 12 23:57:50.470144 systemd[1]: 
run-netns-cni\x2de1f04855\x2decbc\x2d879a\x2d355e\x2d6f7be8e2f628.mount: Deactivated successfully. Aug 12 23:57:50.470840 kubelet[1804]: I0812 23:57:50.470374 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1" Aug 12 23:57:50.476964 containerd[1483]: time="2025-08-12T23:57:50.476916664Z" level=info msg="StopPodSandbox for \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\"" Aug 12 23:57:50.477252 containerd[1483]: time="2025-08-12T23:57:50.477199642Z" level=info msg="Ensure that sandbox f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1 in task-service has been cleanup successfully" Aug 12 23:57:50.479117 containerd[1483]: time="2025-08-12T23:57:50.478170950Z" level=info msg="TearDown network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\" successfully" Aug 12 23:57:50.479117 containerd[1483]: time="2025-08-12T23:57:50.478207423Z" level=info msg="StopPodSandbox for \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\" returns successfully" Aug 12 23:57:50.481819 containerd[1483]: time="2025-08-12T23:57:50.479510045Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\"" Aug 12 23:57:50.481819 containerd[1483]: time="2025-08-12T23:57:50.479602942Z" level=info msg="TearDown network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" successfully" Aug 12 23:57:50.481819 containerd[1483]: time="2025-08-12T23:57:50.479614091Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" returns successfully" Aug 12 23:57:50.481518 systemd[1]: run-netns-cni\x2d31f3ec09\x2d30fe\x2d88d5\x2dc77f\x2de2c25020a339.mount: Deactivated successfully. 
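Each retry goes back through RunPodSandbox with the Attempt field bumped: csi-node-driver-l5vvd is up to Attempt:5 here, and the requests that follow take calico-apiserver-754ccc7fc7-4tmcd to Attempt:3 and the new nginx pod to Attempt:1. The field sits in the sandbox metadata the log keeps dumping as &PodSandboxMetadata{...}; a local stand-in for that shape (an illustration only, not the real CRI-generated type), just to make the progression concrete:

    package main

    import "fmt"

    // PodSandboxMetadata mirrors the fields printed in the log's
    // &PodSandboxMetadata{Name:..., Uid:..., Namespace:..., Attempt:...} dumps.
    type PodSandboxMetadata struct {
        Name      string
        Uid       string
        Namespace string
        Attempt   uint32
    }

    func main() {
        md := PodSandboxMetadata{
            Name:      "csi-node-driver-l5vvd",
            Uid:       "fee2ac30-f11f-43b7-ba5e-ccd47684ad80",
            Namespace: "calico-system",
            Attempt:   5,
        }
        // Every failed CreatePodSandbox leaves the old sandbox ID behind for the
        // cleanup seen above, and the next request goes out with Attempt+1.
        fmt.Printf("RunPodSandbox %s/%s (uid %s) attempt %d\n",
            md.Namespace, md.Name, md.Uid, md.Attempt)
    }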
Aug 12 23:57:50.482152 containerd[1483]: time="2025-08-12T23:57:50.481949769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:1,}" Aug 12 23:57:50.484006 containerd[1483]: time="2025-08-12T23:57:50.483354632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:3,}" Aug 12 23:57:50.645721 containerd[1483]: time="2025-08-12T23:57:50.645648526Z" level=error msg="Failed to destroy network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.646542 containerd[1483]: time="2025-08-12T23:57:50.646500353Z" level=error msg="encountered an error cleaning up failed sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.647357 containerd[1483]: time="2025-08-12T23:57:50.647122171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.647460 kubelet[1804]: E0812 23:57:50.647393 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.647715 kubelet[1804]: E0812 23:57:50.647456 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:50.647715 kubelet[1804]: E0812 23:57:50.647478 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5vvd" Aug 12 23:57:50.647715 kubelet[1804]: E0812 23:57:50.647539 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5vvd_calico-system(fee2ac30-f11f-43b7-ba5e-ccd47684ad80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5vvd" podUID="fee2ac30-f11f-43b7-ba5e-ccd47684ad80" Aug 12 23:57:50.674940 containerd[1483]: time="2025-08-12T23:57:50.674878322Z" level=error msg="Failed to destroy network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.675532 containerd[1483]: time="2025-08-12T23:57:50.675496274Z" level=error msg="encountered an error cleaning up failed sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.675710 containerd[1483]: time="2025-08-12T23:57:50.675684552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.676206 kubelet[1804]: E0812 23:57:50.676151 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.676666 kubelet[1804]: E0812 23:57:50.676240 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:50.676666 kubelet[1804]: E0812 23:57:50.676278 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" Aug 12 23:57:50.676666 kubelet[1804]: E0812 23:57:50.676351 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-754ccc7fc7-4tmcd_calico-apiserver(bd4c9686-c83c-417c-992e-74d7223d78d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" podUID="bd4c9686-c83c-417c-992e-74d7223d78d5" Aug 12 23:57:50.691218 containerd[1483]: time="2025-08-12T23:57:50.691048709Z" level=error msg="Failed to destroy network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.692185 containerd[1483]: time="2025-08-12T23:57:50.691968269Z" level=error msg="encountered an error cleaning up failed sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.692185 containerd[1483]: time="2025-08-12T23:57:50.692068495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.692820 kubelet[1804]: E0812 23:57:50.692578 1804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:57:50.692820 kubelet[1804]: E0812 23:57:50.692646 1804 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ss27p" Aug 12 23:57:50.692820 kubelet[1804]: E0812 23:57:50.692666 1804 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ss27p" Aug 12 23:57:50.693138 kubelet[1804]: E0812 
23:57:50.692720 1804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-ss27p_default(94db9bfe-6519-4933-ac34-b88541ae35ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-ss27p_default(94db9bfe-6519-4933-ac34-b88541ae35ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-ss27p" podUID="94db9bfe-6519-4933-ac34-b88541ae35ec" Aug 12 23:57:51.169547 containerd[1483]: time="2025-08-12T23:57:51.169463992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:51.170741 containerd[1483]: time="2025-08-12T23:57:51.170488651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 12 23:57:51.171279 containerd[1483]: time="2025-08-12T23:57:51.171249406Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:51.173963 containerd[1483]: time="2025-08-12T23:57:51.173919494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:51.174843 containerd[1483]: time="2025-08-12T23:57:51.174800830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.755149202s" Aug 12 23:57:51.174843 containerd[1483]: time="2025-08-12T23:57:51.174842885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 12 23:57:51.185640 containerd[1483]: time="2025-08-12T23:57:51.185548484Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:57:51.206013 containerd[1483]: time="2025-08-12T23:57:51.205951967Z" level=info msg="CreateContainer within sandbox \"f3c17e83dc64ba768b51bb135db20c49f75e3ea8791b3dce7cc27c75da023c45\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"37e2e21576400577c3db9a185149dfcaabc913e3424975d73e23156b8ff2a068\"" Aug 12 23:57:51.207232 containerd[1483]: time="2025-08-12T23:57:51.207041168Z" level=info msg="StartContainer for \"37e2e21576400577c3db9a185149dfcaabc913e3424975d73e23156b8ff2a068\"" Aug 12 23:57:51.248324 kubelet[1804]: E0812 23:57:51.248245 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:51.291373 systemd[1]: Started cri-containerd-37e2e21576400577c3db9a185149dfcaabc913e3424975d73e23156b8ff2a068.scope - libcontainer container 
37e2e21576400577c3db9a185149dfcaabc913e3424975d73e23156b8ff2a068. Aug 12 23:57:51.329145 containerd[1483]: time="2025-08-12T23:57:51.328340498Z" level=info msg="StartContainer for \"37e2e21576400577c3db9a185149dfcaabc913e3424975d73e23156b8ff2a068\" returns successfully" Aug 12 23:57:51.421769 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:57:51.421970 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 12 23:57:51.432009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222-shm.mount: Deactivated successfully. Aug 12 23:57:51.433239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30930717.mount: Deactivated successfully. Aug 12 23:57:51.478975 kubelet[1804]: I0812 23:57:51.478908 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222" Aug 12 23:57:51.481256 containerd[1483]: time="2025-08-12T23:57:51.480904566Z" level=info msg="StopPodSandbox for \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\"" Aug 12 23:57:51.484919 containerd[1483]: time="2025-08-12T23:57:51.482661372Z" level=info msg="Ensure that sandbox f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222 in task-service has been cleanup successfully" Aug 12 23:57:51.486956 systemd[1]: run-netns-cni\x2d542ad0f5\x2d11f8\x2d03c8\x2dcd91\x2d785bc3f1c291.mount: Deactivated successfully. Aug 12 23:57:51.488210 containerd[1483]: time="2025-08-12T23:57:51.488147883Z" level=info msg="TearDown network for sandbox \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\" successfully" Aug 12 23:57:51.488389 kubelet[1804]: I0812 23:57:51.488356 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393" Aug 12 23:57:51.488942 containerd[1483]: time="2025-08-12T23:57:51.488811738Z" level=info msg="StopPodSandbox for \"f8b065f344c3ff1e73658d19670aabe693dc502c7b240206e6dddb713ac14222\" returns successfully" Aug 12 23:57:51.491581 containerd[1483]: time="2025-08-12T23:57:51.491073919Z" level=info msg="StopPodSandbox for \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\"" Aug 12 23:57:51.491581 containerd[1483]: time="2025-08-12T23:57:51.491373164Z" level=info msg="Ensure that sandbox 1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393 in task-service has been cleanup successfully" Aug 12 23:57:51.495940 systemd[1]: run-netns-cni\x2d1de46f96\x2d7a32\x2d8e80\x2dde59\x2d59294fb32a73.mount: Deactivated successfully. 
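The pull of ghcr.io/flatcar/calico/node:v3.30.2 completing above is what eventually unblocks the loop: calico-node is started (container 37e2e215...), the kernel's WireGuard module loads right behind it, and a running calico/node is exactly what the earlier nodename stat failures said they were waiting on. From the figures containerd reports (158,500,163 bytes read in 5.755149202s) the pull ran at roughly 27.5 MB/s; the arithmetic, with both constants copied from the log:

    package main

    import "fmt"

    func main() {
        // Figures taken from the containerd lines above.
        const bytesRead = 158500163     // "bytes read=158500163"
        const pullSeconds = 5.755149202 // "in 5.755149202s"

        mbPerSec := float64(bytesRead) / pullSeconds / 1e6
        fmt.Printf("calico/node pull: %.1f MB/s (%.1f MiB/s)\n",
            mbPerSec, float64(bytesRead)/pullSeconds/(1<<20))
    }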
Aug 12 23:57:51.496976 containerd[1483]: time="2025-08-12T23:57:51.496302116Z" level=info msg="TearDown network for sandbox \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\" successfully" Aug 12 23:57:51.496976 containerd[1483]: time="2025-08-12T23:57:51.496337045Z" level=info msg="StopPodSandbox for \"1416d5673e95f25fbadebeae7a403f355c3e1730f185c12562125f84e97bc393\" returns successfully" Aug 12 23:57:51.499345 containerd[1483]: time="2025-08-12T23:57:51.498960496Z" level=info msg="StopPodSandbox for \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\"" Aug 12 23:57:51.499345 containerd[1483]: time="2025-08-12T23:57:51.499009564Z" level=info msg="StopPodSandbox for \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\"" Aug 12 23:57:51.499345 containerd[1483]: time="2025-08-12T23:57:51.499158815Z" level=info msg="TearDown network for sandbox \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\" successfully" Aug 12 23:57:51.499345 containerd[1483]: time="2025-08-12T23:57:51.499176821Z" level=info msg="StopPodSandbox for \"1f5e1228b9c1956276576512d15c8019fdbc2109fd174ffb8a10bb09f2a588bd\" returns successfully" Aug 12 23:57:51.499728 containerd[1483]: time="2025-08-12T23:57:51.499707710Z" level=info msg="TearDown network for sandbox \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\" successfully" Aug 12 23:57:51.500149 containerd[1483]: time="2025-08-12T23:57:51.499847288Z" level=info msg="StopPodSandbox for \"ba47dfa9cd71f1b8267d0ee843c64b15b18763f2d5a6f5362b2bc74048302ecc\" returns successfully" Aug 12 23:57:51.500525 containerd[1483]: time="2025-08-12T23:57:51.500507454Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\"" Aug 12 23:57:51.500837 containerd[1483]: time="2025-08-12T23:57:51.500820227Z" level=info msg="TearDown network for sandbox \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" successfully" Aug 12 23:57:51.500949 containerd[1483]: time="2025-08-12T23:57:51.500928588Z" level=info msg="StopPodSandbox for \"940d62105e425d2171d692581357b09b112ed8b4a38e42c67926be16a81eb3f4\" returns successfully" Aug 12 23:57:51.501049 containerd[1483]: time="2025-08-12T23:57:51.500508142Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\"" Aug 12 23:57:51.501453 kubelet[1804]: I0812 23:57:51.501435 1804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c" Aug 12 23:57:51.503067 containerd[1483]: time="2025-08-12T23:57:51.502062367Z" level=info msg="StopPodSandbox for \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\"" Aug 12 23:57:51.503067 containerd[1483]: time="2025-08-12T23:57:51.502301606Z" level=info msg="Ensure that sandbox ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c in task-service has been cleanup successfully" Aug 12 23:57:51.503067 containerd[1483]: time="2025-08-12T23:57:51.501632657Z" level=info msg="StopPodSandbox for \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\"" Aug 12 23:57:51.503067 containerd[1483]: time="2025-08-12T23:57:51.502863922Z" level=info msg="TearDown network for sandbox \"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" successfully" Aug 12 23:57:51.503067 containerd[1483]: time="2025-08-12T23:57:51.502879788Z" level=info msg="StopPodSandbox for 
\"b09de2089d69efea10f896dcdd456779191f4b212adc194137bc57d5d6f78deb\" returns successfully" Aug 12 23:57:51.507162 systemd[1]: run-netns-cni\x2d4625b6c0\x2dcf12\x2df2ae\x2d28be\x2db3a299da61b5.mount: Deactivated successfully. Aug 12 23:57:51.509062 containerd[1483]: time="2025-08-12T23:57:51.508737549Z" level=info msg="TearDown network for sandbox \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" successfully" Aug 12 23:57:51.509062 containerd[1483]: time="2025-08-12T23:57:51.508776028Z" level=info msg="StopPodSandbox for \"29f0fbde7712a23fd4fcb1a076cd04601e90ec013ac86b28e8c0d55cb5320740\" returns successfully" Aug 12 23:57:51.510746 containerd[1483]: time="2025-08-12T23:57:51.510147107Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\"" Aug 12 23:57:51.510746 containerd[1483]: time="2025-08-12T23:57:51.510248166Z" level=info msg="TearDown network for sandbox \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" successfully" Aug 12 23:57:51.510746 containerd[1483]: time="2025-08-12T23:57:51.510260246Z" level=info msg="StopPodSandbox for \"50f8a3c71284b46c337e457f5c0c0fe20572df29e5f3a8de71d3afca99088360\" returns successfully" Aug 12 23:57:51.510746 containerd[1483]: time="2025-08-12T23:57:51.510268666Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\"" Aug 12 23:57:51.510951 containerd[1483]: time="2025-08-12T23:57:51.510624575Z" level=info msg="TearDown network for sandbox \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" successfully" Aug 12 23:57:51.510951 containerd[1483]: time="2025-08-12T23:57:51.510785684Z" level=info msg="StopPodSandbox for \"fb815f30ad4d15fec6a1627935b0980c76f1ea3176587b19b49cbee0d0d28613\" returns successfully" Aug 12 23:57:51.511724 containerd[1483]: time="2025-08-12T23:57:51.511701012Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\"" Aug 12 23:57:51.511939 containerd[1483]: time="2025-08-12T23:57:51.511922059Z" level=info msg="TearDown network for sandbox \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" successfully" Aug 12 23:57:51.511999 containerd[1483]: time="2025-08-12T23:57:51.511988858Z" level=info msg="StopPodSandbox for \"503fdcecdfd1498f3c16c29bacb792b4b0f9a566e4593f68e29d8f94e8f62188\" returns successfully" Aug 12 23:57:51.512119 containerd[1483]: time="2025-08-12T23:57:51.512105382Z" level=info msg="TearDown network for sandbox \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\" successfully" Aug 12 23:57:51.512539 containerd[1483]: time="2025-08-12T23:57:51.512518449Z" level=info msg="StopPodSandbox for \"ad089b30ddb8c5f091ad2efb5ceaaaaabfda974708d515fe6f4d90602140969c\" returns successfully" Aug 12 23:57:51.512676 containerd[1483]: time="2025-08-12T23:57:51.512245576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:4,}" Aug 12 23:57:51.513203 containerd[1483]: time="2025-08-12T23:57:51.513183407Z" level=info msg="StopPodSandbox for \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\"" Aug 12 23:57:51.513846 containerd[1483]: time="2025-08-12T23:57:51.513826937Z" level=info msg="TearDown network for sandbox \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\" successfully" Aug 12 23:57:51.513945 containerd[1483]: 
time="2025-08-12T23:57:51.513932700Z" level=info msg="StopPodSandbox for \"f2ed697318a423944da5627d5067f748a846f55fcc8ae47d950084c76ff90be1\" returns successfully" Aug 12 23:57:51.514623 containerd[1483]: time="2025-08-12T23:57:51.512695373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:6,}" Aug 12 23:57:51.515076 containerd[1483]: time="2025-08-12T23:57:51.515053776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:2,}" Aug 12 23:57:51.914587 systemd-networkd[1371]: cali729d8961110: Link UP Aug 12 23:57:51.916150 systemd-networkd[1371]: cali729d8961110: Gained carrier Aug 12 23:57:51.942109 kubelet[1804]: I0812 23:57:51.941880 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fqh24" podStartSLOduration=4.122870353 podStartE2EDuration="16.941858499s" podCreationTimestamp="2025-08-12 23:57:35 +0000 UTC" firstStartedPulling="2025-08-12 23:57:38.357328797 +0000 UTC m=+4.224197487" lastFinishedPulling="2025-08-12 23:57:51.17631693 +0000 UTC m=+17.043185633" observedRunningTime="2025-08-12 23:57:51.507604368 +0000 UTC m=+17.374473080" watchObservedRunningTime="2025-08-12 23:57:51.941858499 +0000 UTC m=+17.808727207" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.648 [INFO][2681] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.675 [INFO][2681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0 nginx-deployment-7fcdb87857- default 94db9bfe-6519-4933-ac34-b88541ae35ec 1466 0 2025-08-12 23:57:49 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 24.199.122.14 nginx-deployment-7fcdb87857-ss27p eth0 default [] [] [kns.default ksa.default.default] cali729d8961110 [] [] }} ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.676 [INFO][2681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.744 [INFO][2710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" HandleID="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Workload="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.744 [INFO][2710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" HandleID="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Workload="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"default", "node":"24.199.122.14", "pod":"nginx-deployment-7fcdb87857-ss27p", "timestamp":"2025-08-12 23:57:51.744489829 +0000 UTC"}, Hostname:"24.199.122.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.746 [INFO][2710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.746 [INFO][2710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.746 [INFO][2710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '24.199.122.14' Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.756 [INFO][2710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.817 [INFO][2710] ipam/ipam.go 394: Looking up existing affinities for host host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.828 [INFO][2710] ipam/ipam.go 543: Ran out of existing affine blocks for host host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.833 [INFO][2710] ipam/ipam.go 560: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.838 [INFO][2710] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.838 [INFO][2710] ipam/ipam.go 572: Found unclaimed block host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.838 [INFO][2710] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.850 [INFO][2710] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.850 [INFO][2710] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.857 [INFO][2710] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.862 [INFO][2710] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.865 [INFO][2710] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.866 [INFO][2710] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.875 [INFO][2710] ipam/ipam_block_reader_writer.go 267: Successfully created block Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.875 [INFO][2710] 
ipam/ipam_block_reader_writer.go 283: Confirming affinity host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.883 [INFO][2710] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.883 [INFO][2710] ipam/ipam.go 607: Block '192.168.17.64/26' has 64 free ips which is more than 1 ips required. host="24.199.122.14" subnet=192.168.17.64/26 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.883 [INFO][2710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.886 [INFO][2710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528 Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.892 [INFO][2710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" host="24.199.122.14" Aug 12 23:57:51.946588 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.64/26] block=192.168.17.64/26 handle="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" host="24.199.122.14" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.64/26] handle="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" host="24.199.122.14" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
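The IPAM lines above show the allocator running out of affine blocks for host 24.199.122.14, claiming the free block 192.168.17.64/26 (64 addresses), confirming the host affinity, and then handing out the lowest free ordinal from that block while holding the host-wide lock. As a rough illustration of the block/ordinal arithmetic only — a hypothetical toy sketch, not Calico's actual data structures or code — something like:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy model of one /26 IPAM block: a base prefix plus a
// 64-entry allocation map indexed by ordinal (offset from the base
// address), mirroring the idea visible in the log above.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// assign returns the lowest free address in the block and marks it used.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr() // ordinal 0 = the block's base address (here 192.168.17.64)
	for ordinal := 0; ordinal < len(b.used); ordinal++ {
		if !b.used[ordinal] {
			b.used[ordinal] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.17.64/26")}
	for i := 0; i < 3; i++ {
		ip, _ := b.assign()
		fmt.Println(ip) // 192.168.17.64, then .65, .66
	}
}
```

In the log, ordinal 0 of this block (192.168.17.64/32) is what ends up on the nginx-deployment-7fcdb87857-ss27p endpoint a few lines further down.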
Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.64/26] IPv6=[] ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" HandleID="k8s-pod-network.ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Workload="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.902 [INFO][2681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"94db9bfe-6519-4933-ac34-b88541ae35ec", ResourceVersion:"1466", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-ss27p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali729d8961110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.902 [INFO][2681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.64/32] ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.902 [INFO][2681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali729d8961110 ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.915 [INFO][2681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.917 [INFO][2681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" 
WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"94db9bfe-6519-4933-ac34-b88541ae35ec", ResourceVersion:"1466", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528", Pod:"nginx-deployment-7fcdb87857-ss27p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali729d8961110", MAC:"16:9a:16:6a:cd:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:51.947870 containerd[1483]: 2025-08-12 23:57:51.945 [INFO][2681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528" Namespace="default" Pod="nginx-deployment-7fcdb87857-ss27p" WorkloadEndpoint="24.199.122.14-k8s-nginx--deployment--7fcdb87857--ss27p-eth0" Aug 12 23:57:51.969437 containerd[1483]: time="2025-08-12T23:57:51.969121145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:51.969437 containerd[1483]: time="2025-08-12T23:57:51.969195746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:51.969437 containerd[1483]: time="2025-08-12T23:57:51.969208446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:51.969437 containerd[1483]: time="2025-08-12T23:57:51.969293250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:51.992877 systemd[1]: Started cri-containerd-ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528.scope - libcontainer container ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528. 
Aug 12 23:57:52.048301 containerd[1483]: time="2025-08-12T23:57:52.048251254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ss27p,Uid:94db9bfe-6519-4933-ac34-b88541ae35ec,Namespace:default,Attempt:2,} returns sandbox id \"ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528\"" Aug 12 23:57:52.050827 containerd[1483]: time="2025-08-12T23:57:52.050332201Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 12 23:57:52.113687 systemd-networkd[1371]: calic664bd135df: Link UP Aug 12 23:57:52.116053 systemd-networkd[1371]: calic664bd135df: Gained carrier Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.653 [INFO][2679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.676 [INFO][2679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {24.199.122.14-k8s-csi--node--driver--l5vvd-eth0 csi-node-driver- calico-system fee2ac30-f11f-43b7-ba5e-ccd47684ad80 1315 0 2025-08-12 23:57:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 24.199.122.14 csi-node-driver-l5vvd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic664bd135df [] [] }} ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.677 [INFO][2679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.750 [INFO][2716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" HandleID="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Workload="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.750 [INFO][2716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" HandleID="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Workload="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d860), Attrs:map[string]string{"namespace":"calico-system", "node":"24.199.122.14", "pod":"csi-node-driver-l5vvd", "timestamp":"2025-08-12 23:57:51.749987549 +0000 UTC"}, Hostname:"24.199.122.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.750 [INFO][2716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.898 [INFO][2716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '24.199.122.14' Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.918 [INFO][2716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.944 [INFO][2716] ipam/ipam.go 394: Looking up existing affinities for host host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.971 [INFO][2716] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.979 [INFO][2716] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.986 [INFO][2716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.986 [INFO][2716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:51.991 [INFO][2716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213 Aug 12 23:57:52.136924 containerd[1483]: 2025-08-12 23:57:52.009 [INFO][2716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.023 [ERROR][2716] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-17-64-26) Name="192-168-17-64-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-17-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1501", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.17.64/26", Affinity:(*string)(0xc0003ec8b0), Allocations:[]*int{(*int)(0xc000535868), (*int)(0xc0005359c0), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{2, 3, 4, 5, 6, 7, 8, 
9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003ec8e0), AttrSecondary:map[string]string{"namespace":"default", "node":"24.199.122.14", "pod":"nginx-deployment-7fcdb87857-ss27p", "timestamp":"2025-08-12 23:57:51.744489829 +0000 UTC"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc00039d860), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"24.199.122.14", "pod":"csi-node-driver-l5vvd", "timestamp":"2025-08-12 23:57:51.749987549 +0000 UTC"}}}, SequenceNumber:0x185b2a601016d81a, SequenceNumberForAllocation:map[string]uint64{"0":0x185b2a601016d818, "1":0x185b2a601016d819}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-17-64-26": the object has been modified; please apply your changes to the latest version and try again Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.023 [INFO][2716] ipam/ipam.go 1247: Failed to update block block=192.168.17.64/26 error=update conflict: IPAMBlock(192-168-17-64-26) handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.079 [INFO][2716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.083 [INFO][2716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213 Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.091 [INFO][2716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.66/26] block=192.168.17.64/26 handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.66/26] handle="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" host="24.199.122.14" Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
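The [ERROR] above is not fatal: the IPAMBlock write failed the datastore's optimistic-concurrency check ("the object has been modified; please apply your changes to the latest version and try again") because a concurrent writer had already bumped the object's resourceVersion, so the allocator re-read the block, retried, and ended up claiming 192.168.17.66 for csi-node-driver-l5vvd. A minimal, self-contained model of that read-modify-write-retry pattern — a hypothetical in-memory sketch, not the real Kubernetes CRD client — looks like:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errConflict = errors.New("object has been modified; retry against the latest version")

// versionedStore stands in for a datastore that only accepts an update
// when the caller read the current version (optimistic concurrency).
type versionedStore struct {
	mu      sync.Mutex
	version int
	value   []string
}

func (s *versionedStore) get() (int, []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.version, append([]string(nil), s.value...)
}

// updateIf applies the new value only if readVersion is still current.
func (s *versionedStore) updateIf(readVersion int, value []string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if readVersion != s.version {
		return errConflict
	}
	s.value = value
	s.version++
	return nil
}

// claim appends an allocation, re-reading and retrying on conflict,
// much like the allocator does in the log above.
func claim(s *versionedStore, alloc string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		v, current := s.get()
		if err := s.updateIf(v, append(current, alloc)); err == nil {
			return nil
		} else if !errors.Is(err, errConflict) {
			return err
		}
		// Conflict: another writer got there first; loop re-reads the latest version.
	}
	return fmt.Errorf("gave up after %d conflicts", maxRetries)
}

func main() {
	s := &versionedStore{}
	fmt.Println(claim(s, "csi-node-driver-l5vvd", 5)) // <nil>
}
```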
Aug 12 23:57:52.137687 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.66/26] IPv6=[] ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" HandleID="k8s-pod-network.e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Workload="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.110 [INFO][2679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-csi--node--driver--l5vvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fee2ac30-f11f-43b7-ba5e-ccd47684ad80", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"", Pod:"csi-node-driver-l5vvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic664bd135df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.110 [INFO][2679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.66/32] ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.110 [INFO][2679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic664bd135df ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.115 [INFO][2679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.115 [INFO][2679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" 
WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-csi--node--driver--l5vvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fee2ac30-f11f-43b7-ba5e-ccd47684ad80", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213", Pod:"csi-node-driver-l5vvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic664bd135df", MAC:"fa:15:38:6f:8e:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:52.138148 containerd[1483]: 2025-08-12 23:57:52.135 [INFO][2679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213" Namespace="calico-system" Pod="csi-node-driver-l5vvd" WorkloadEndpoint="24.199.122.14-k8s-csi--node--driver--l5vvd-eth0" Aug 12 23:57:52.169840 containerd[1483]: time="2025-08-12T23:57:52.169583210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:52.169840 containerd[1483]: time="2025-08-12T23:57:52.169660139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:52.169840 containerd[1483]: time="2025-08-12T23:57:52.169672842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:52.172335 containerd[1483]: time="2025-08-12T23:57:52.170600766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:52.194380 systemd[1]: Started cri-containerd-e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213.scope - libcontainer container e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213. 
Aug 12 23:57:52.199168 systemd-networkd[1371]: calib09cd0c5232: Link UP Aug 12 23:57:52.200679 systemd-networkd[1371]: calib09cd0c5232: Gained carrier Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.621 [INFO][2662] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.675 [INFO][2662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0 calico-apiserver-754ccc7fc7- calico-apiserver bd4c9686-c83c-417c-992e-74d7223d78d5 1432 0 2025-08-12 23:57:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:754ccc7fc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 24.199.122.14 calico-apiserver-754ccc7fc7-4tmcd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib09cd0c5232 [] [] }} ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.675 [INFO][2662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.752 [INFO][2717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" HandleID="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Workload="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.752 [INFO][2717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" HandleID="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Workload="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"24.199.122.14", "pod":"calico-apiserver-754ccc7fc7-4tmcd", "timestamp":"2025-08-12 23:57:51.752261728 +0000 UTC"}, Hostname:"24.199.122.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:51.753 [INFO][2717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.108 [INFO][2717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '24.199.122.14' Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.124 [INFO][2717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.140 [INFO][2717] ipam/ipam.go 394: Looking up existing affinities for host host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.154 [INFO][2717] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.159 [INFO][2717] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.163 [INFO][2717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.163 [INFO][2717] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.166 [INFO][2717] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1 Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.175 [INFO][2717] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.187 [INFO][2717] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.67/26] block=192.168.17.64/26 handle="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.187 [INFO][2717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.67/26] handle="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" host="24.199.122.14" Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.187 [INFO][2717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:52.217141 containerd[1483]: 2025-08-12 23:57:52.187 [INFO][2717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.67/26] IPv6=[] ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" HandleID="k8s-pod-network.d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Workload="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.191 [INFO][2662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0", GenerateName:"calico-apiserver-754ccc7fc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd4c9686-c83c-417c-992e-74d7223d78d5", ResourceVersion:"1432", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"754ccc7fc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"", Pod:"calico-apiserver-754ccc7fc7-4tmcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib09cd0c5232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.192 [INFO][2662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.67/32] ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.192 [INFO][2662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib09cd0c5232 ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.203 [INFO][2662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.204 [INFO][2662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0", GenerateName:"calico-apiserver-754ccc7fc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd4c9686-c83c-417c-992e-74d7223d78d5", ResourceVersion:"1432", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"754ccc7fc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1", Pod:"calico-apiserver-754ccc7fc7-4tmcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib09cd0c5232", MAC:"f2:7f:5a:f2:52:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:52.217793 containerd[1483]: 2025-08-12 23:57:52.215 [INFO][2662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1" Namespace="calico-apiserver" Pod="calico-apiserver-754ccc7fc7-4tmcd" WorkloadEndpoint="24.199.122.14-k8s-calico--apiserver--754ccc7fc7--4tmcd-eth0" Aug 12 23:57:52.240775 containerd[1483]: time="2025-08-12T23:57:52.240429786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5vvd,Uid:fee2ac30-f11f-43b7-ba5e-ccd47684ad80,Namespace:calico-system,Attempt:6,} returns sandbox id \"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213\"" Aug 12 23:57:52.249265 kubelet[1804]: E0812 23:57:52.249209 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:52.252652 containerd[1483]: time="2025-08-12T23:57:52.252226886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:52.252652 containerd[1483]: time="2025-08-12T23:57:52.252451711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:52.252652 containerd[1483]: time="2025-08-12T23:57:52.252480799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:52.253132 containerd[1483]: time="2025-08-12T23:57:52.252723001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:52.275309 systemd[1]: Started cri-containerd-d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1.scope - libcontainer container d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1. Aug 12 23:57:52.319728 containerd[1483]: time="2025-08-12T23:57:52.319675010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-754ccc7fc7-4tmcd,Uid:bd4c9686-c83c-417c-992e-74d7223d78d5,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1\"" Aug 12 23:57:53.190379 systemd-networkd[1371]: calic664bd135df: Gained IPv6LL Aug 12 23:57:53.249982 kubelet[1804]: E0812 23:57:53.249921 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:53.640182 systemd-networkd[1371]: cali729d8961110: Gained IPv6LL Aug 12 23:57:53.733504 kernel: bpftool[3028]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 12 23:57:53.766809 systemd-networkd[1371]: calib09cd0c5232: Gained IPv6LL Aug 12 23:57:54.071739 systemd-networkd[1371]: vxlan.calico: Link UP Aug 12 23:57:54.071754 systemd-networkd[1371]: vxlan.calico: Gained carrier Aug 12 23:57:54.252443 kubelet[1804]: E0812 23:57:54.251024 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:55.234956 kubelet[1804]: E0812 23:57:55.234909 1804 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:55.251820 kubelet[1804]: E0812 23:57:55.251777 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:55.318942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391712536.mount: Deactivated successfully. 
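The systemd-networkd entries above record the calico VXLAN overlay device (vxlan.calico) gaining carrier and each cali* veth gaining an IPv6 link-local address ("Gained IPv6LL"). The log does not print the resulting addresses, and the kernel may be configured for stable-privacy addresses instead, but link-local addresses are classically derived from the interface MAC via modified EUI-64; a small illustration using one of the endpoint MACs recorded earlier (16:9a:16:6a:cd:dc, the nginx pod endpoint), purely as an example:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives a modified EUI-64 IPv6 link-local address
// (fe80::/64) from a 48-bit MAC: flip the universal/local bit of the
// first octet and splice ff:fe into the middle of the MAC.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // invert the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("16:9a:16:6a:cd:dc") // MAC from the log above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::149a:16ff:fe6a:cddc
}
```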
Aug 12 23:57:55.495603 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Aug 12 23:57:56.252609 kubelet[1804]: E0812 23:57:56.252533 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:56.549987 containerd[1483]: time="2025-08-12T23:57:56.549838077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:56.550937 containerd[1483]: time="2025-08-12T23:57:56.550854590Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204" Aug 12 23:57:56.553051 containerd[1483]: time="2025-08-12T23:57:56.551535125Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:56.554960 containerd[1483]: time="2025-08-12T23:57:56.554924750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:56.555979 containerd[1483]: time="2025-08-12T23:57:56.555937767Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 4.505561628s" Aug 12 23:57:56.556118 containerd[1483]: time="2025-08-12T23:57:56.556099077Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 12 23:57:56.557488 containerd[1483]: time="2025-08-12T23:57:56.557466727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:57:56.559889 containerd[1483]: time="2025-08-12T23:57:56.559860814Z" level=info msg="CreateContainer within sandbox \"ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 12 23:57:56.572805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276724096.mount: Deactivated successfully. Aug 12 23:57:56.582310 containerd[1483]: time="2025-08-12T23:57:56.582186599Z" level=info msg="CreateContainer within sandbox \"ae7359da4d179ca646e97a3b4cb2412aedda3fa9bf4335b58cb9725170215528\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a\"" Aug 12 23:57:56.583033 containerd[1483]: time="2025-08-12T23:57:56.582995477Z" level=info msg="StartContainer for \"16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a\"" Aug 12 23:57:56.619493 systemd[1]: run-containerd-runc-k8s.io-16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a-runc.BdeKg5.mount: Deactivated successfully. Aug 12 23:57:56.630387 systemd[1]: Started cri-containerd-16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a.scope - libcontainer container 16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a. 
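The "Pulled image ... in 4.505561628s" figure above can be sanity-checked against the surrounding containerd timestamps: the PullImage request for ghcr.io/flatcar/nginx:latest was logged at 23:57:52.050332201Z and the Pulled event at 23:57:56.555937767Z, about 4.5 s apart (the reported value is containerd's internal measurement, so it differs from the log-timestamp delta by a few milliseconds). A trivial stdlib check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd log lines above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-08-12T23:57:52.050332201Z") // PullImage request
	done, _ := time.Parse(time.RFC3339Nano, "2025-08-12T23:57:56.555937767Z")  // Pulled image event
	fmt.Println(done.Sub(start)) // 4.505605566s, close to the reported 4.505561628s
}
```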
Aug 12 23:57:56.658540 containerd[1483]: time="2025-08-12T23:57:56.657306001Z" level=info msg="StartContainer for \"16a2ac0fb3a1bb8d8502901221c278bfc0b6ca8bc3b2f3d747db90fca127c42a\" returns successfully" Aug 12 23:57:57.253487 kubelet[1804]: E0812 23:57:57.253414 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:57.754138 containerd[1483]: time="2025-08-12T23:57:57.754085361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:57.755597 containerd[1483]: time="2025-08-12T23:57:57.755215888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 12 23:57:57.756336 containerd[1483]: time="2025-08-12T23:57:57.756295653Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:57.759514 containerd[1483]: time="2025-08-12T23:57:57.759463821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:57.760692 containerd[1483]: time="2025-08-12T23:57:57.760650325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.202599398s" Aug 12 23:57:57.760968 containerd[1483]: time="2025-08-12T23:57:57.760833986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 12 23:57:57.763138 containerd[1483]: time="2025-08-12T23:57:57.763065079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:57:57.765560 containerd[1483]: time="2025-08-12T23:57:57.765291684Z" level=info msg="CreateContainer within sandbox \"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:57:57.781597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470842537.mount: Deactivated successfully. Aug 12 23:57:57.788612 containerd[1483]: time="2025-08-12T23:57:57.788484691Z" level=info msg="CreateContainer within sandbox \"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"76aaad72905a904a8561f2d6a13860d65724b7a63e741bc966549a6e943445e1\"" Aug 12 23:57:57.789447 containerd[1483]: time="2025-08-12T23:57:57.789408370Z" level=info msg="StartContainer for \"76aaad72905a904a8561f2d6a13860d65724b7a63e741bc966549a6e943445e1\"" Aug 12 23:57:57.839364 systemd[1]: Started cri-containerd-76aaad72905a904a8561f2d6a13860d65724b7a63e741bc966549a6e943445e1.scope - libcontainer container 76aaad72905a904a8561f2d6a13860d65724b7a63e741bc966549a6e943445e1. 
Aug 12 23:57:57.884015 containerd[1483]: time="2025-08-12T23:57:57.883967264Z" level=info msg="StartContainer for \"76aaad72905a904a8561f2d6a13860d65724b7a63e741bc966549a6e943445e1\" returns successfully" Aug 12 23:57:58.254319 kubelet[1804]: E0812 23:57:58.254276 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:59.255299 kubelet[1804]: E0812 23:57:59.255262 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:57:59.996648 containerd[1483]: time="2025-08-12T23:57:59.996582940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:59.997663 containerd[1483]: time="2025-08-12T23:57:59.997477344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 12 23:57:59.998525 containerd[1483]: time="2025-08-12T23:57:59.998458922Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:00.004639 containerd[1483]: time="2025-08-12T23:58:00.004543254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:00.006276 containerd[1483]: time="2025-08-12T23:58:00.005990379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.242877458s" Aug 12 23:58:00.006276 containerd[1483]: time="2025-08-12T23:58:00.006047549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 12 23:58:00.010845 containerd[1483]: time="2025-08-12T23:58:00.010651616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:58:00.019776 containerd[1483]: time="2025-08-12T23:58:00.019578565Z" level=info msg="CreateContainer within sandbox \"d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:58:00.080182 containerd[1483]: time="2025-08-12T23:58:00.079997395Z" level=info msg="CreateContainer within sandbox \"d64a92e49bbdb71fd7ada8a4a37fb0e23f9665c9e3ad06be258248df1c7f93f1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"263ad5f9fa98169956f32eb7d05cc5bcdbd3b24b728e8aa0ff00428c2ed0f8f4\"" Aug 12 23:58:00.083116 containerd[1483]: time="2025-08-12T23:58:00.082274281Z" level=info msg="StartContainer for \"263ad5f9fa98169956f32eb7d05cc5bcdbd3b24b728e8aa0ff00428c2ed0f8f4\"" Aug 12 23:58:00.134852 systemd[1]: Started cri-containerd-263ad5f9fa98169956f32eb7d05cc5bcdbd3b24b728e8aa0ff00428c2ed0f8f4.scope - libcontainer container 263ad5f9fa98169956f32eb7d05cc5bcdbd3b24b728e8aa0ff00428c2ed0f8f4. 
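The CreateContainer/StartContainer pairs above are CRI calls made by the kubelet to containerd over its gRPC socket; crictl is the usual CLI for inspecting the same endpoint. For poking at it programmatically, a hedged sketch — assuming the k8s.io/cri-api v1 client and containerd's CRI service listening on /run/containerd/containerd.sock, which may differ on other setups — could look like:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to the CRI runtime service over the containerd unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Runtime identity, then the containers the kubelet has created.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}
```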
Aug 12 23:58:00.199491 containerd[1483]: time="2025-08-12T23:58:00.199410735Z" level=info msg="StartContainer for \"263ad5f9fa98169956f32eb7d05cc5bcdbd3b24b728e8aa0ff00428c2ed0f8f4\" returns successfully" Aug 12 23:58:00.256604 kubelet[1804]: E0812 23:58:00.256433 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:00.574868 kubelet[1804]: I0812 23:58:00.574687 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-ss27p" podStartSLOduration=7.067710119 podStartE2EDuration="11.574648698s" podCreationTimestamp="2025-08-12 23:57:49 +0000 UTC" firstStartedPulling="2025-08-12 23:57:52.049950026 +0000 UTC m=+17.916818730" lastFinishedPulling="2025-08-12 23:57:56.556888608 +0000 UTC m=+22.423757309" observedRunningTime="2025-08-12 23:57:57.553105096 +0000 UTC m=+23.419973802" watchObservedRunningTime="2025-08-12 23:58:00.574648698 +0000 UTC m=+26.441517411" Aug 12 23:58:01.257737 kubelet[1804]: E0812 23:58:01.257685 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:01.463109 containerd[1483]: time="2025-08-12T23:58:01.462497999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:01.464117 containerd[1483]: time="2025-08-12T23:58:01.463810856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 12 23:58:01.464689 containerd[1483]: time="2025-08-12T23:58:01.464651122Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:01.467045 containerd[1483]: time="2025-08-12T23:58:01.467004879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:01.467700 containerd[1483]: time="2025-08-12T23:58:01.467658802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.456901868s" Aug 12 23:58:01.467700 containerd[1483]: time="2025-08-12T23:58:01.467697712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 12 23:58:01.473525 containerd[1483]: time="2025-08-12T23:58:01.473267420Z" level=info msg="CreateContainer within sandbox \"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:58:01.494591 containerd[1483]: time="2025-08-12T23:58:01.494449223Z" level=info msg="CreateContainer within sandbox \"e98cf3895d26db3a5cfc886268bad5f32002ddab7576ea7514ea5ee57d511213\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"eba1a254316831a8805c21f7b2e823da132a2404a680cf39ad14a56aa8b2642d\"" Aug 12 23:58:01.495827 containerd[1483]: time="2025-08-12T23:58:01.495346972Z" level=info msg="StartContainer for \"eba1a254316831a8805c21f7b2e823da132a2404a680cf39ad14a56aa8b2642d\"" Aug 12 23:58:01.557819 systemd[1]: Started cri-containerd-eba1a254316831a8805c21f7b2e823da132a2404a680cf39ad14a56aa8b2642d.scope - libcontainer container eba1a254316831a8805c21f7b2e823da132a2404a680cf39ad14a56aa8b2642d. Aug 12 23:58:01.566260 kubelet[1804]: I0812 23:58:01.566066 1804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:01.600666 containerd[1483]: time="2025-08-12T23:58:01.600598424Z" level=info msg="StartContainer for \"eba1a254316831a8805c21f7b2e823da132a2404a680cf39ad14a56aa8b2642d\" returns successfully" Aug 12 23:58:02.257942 kubelet[1804]: E0812 23:58:02.257868 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:02.408909 kubelet[1804]: I0812 23:58:02.408869 1804 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:58:02.410345 kubelet[1804]: I0812 23:58:02.410299 1804 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:58:02.596673 kubelet[1804]: I0812 23:58:02.596369 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-754ccc7fc7-4tmcd" podStartSLOduration=7.908039875 podStartE2EDuration="15.596343218s" podCreationTimestamp="2025-08-12 23:57:47 +0000 UTC" firstStartedPulling="2025-08-12 23:57:52.321369592 +0000 UTC m=+18.188238299" lastFinishedPulling="2025-08-12 23:58:00.009672938 +0000 UTC m=+25.876541642" observedRunningTime="2025-08-12 23:58:00.575070458 +0000 UTC m=+26.441939176" watchObservedRunningTime="2025-08-12 23:58:02.596343218 +0000 UTC m=+28.463211931" Aug 12 23:58:03.258899 kubelet[1804]: E0812 23:58:03.258824 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:04.260050 kubelet[1804]: E0812 23:58:04.259944 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:04.495977 kubelet[1804]: I0812 23:58:04.495896 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l5vvd" podStartSLOduration=20.269802756 podStartE2EDuration="29.495873697s" podCreationTimestamp="2025-08-12 23:57:35 +0000 UTC" firstStartedPulling="2025-08-12 23:57:52.242863843 +0000 UTC m=+18.109732546" lastFinishedPulling="2025-08-12 23:58:01.468934797 +0000 UTC m=+27.335803487" observedRunningTime="2025-08-12 23:58:02.596775628 +0000 UTC m=+28.463644357" watchObservedRunningTime="2025-08-12 23:58:04.495873697 +0000 UTC m=+30.362742410" Aug 12 23:58:04.505848 systemd[1]: Created slice kubepods-besteffort-pod274c4ab1_8ab3_4daf_ab77_7c9576c0cf11.slice - libcontainer container kubepods-besteffort-pod274c4ab1_8ab3_4daf_ab77_7c9576c0cf11.slice. 
Aug 12 23:58:04.555504 kubelet[1804]: I0812 23:58:04.555358 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/274c4ab1-8ab3-4daf-ab77-7c9576c0cf11-data\") pod \"nfs-server-provisioner-0\" (UID: \"274c4ab1-8ab3-4daf-ab77-7c9576c0cf11\") " pod="default/nfs-server-provisioner-0" Aug 12 23:58:04.555970 kubelet[1804]: I0812 23:58:04.555890 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz74t\" (UniqueName: \"kubernetes.io/projected/274c4ab1-8ab3-4daf-ab77-7c9576c0cf11-kube-api-access-gz74t\") pod \"nfs-server-provisioner-0\" (UID: \"274c4ab1-8ab3-4daf-ab77-7c9576c0cf11\") " pod="default/nfs-server-provisioner-0" Aug 12 23:58:04.809693 containerd[1483]: time="2025-08-12T23:58:04.809624440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:274c4ab1-8ab3-4daf-ab77-7c9576c0cf11,Namespace:default,Attempt:0,}" Aug 12 23:58:05.032297 systemd-networkd[1371]: cali60e51b789ff: Link UP Aug 12 23:58:05.040274 systemd-networkd[1371]: cali60e51b789ff: Gained carrier Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.882 [INFO][3337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {24.199.122.14-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 274c4ab1-8ab3-4daf-ab77-7c9576c0cf11 1607 0 2025-08-12 23:58:04 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 24.199.122.14 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.882 [INFO][3337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.921 [INFO][3350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" HandleID="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Workload="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.922 [INFO][3350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" HandleID="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" 
Workload="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f240), Attrs:map[string]string{"namespace":"default", "node":"24.199.122.14", "pod":"nfs-server-provisioner-0", "timestamp":"2025-08-12 23:58:04.921918604 +0000 UTC"}, Hostname:"24.199.122.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.922 [INFO][3350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.922 [INFO][3350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.922 [INFO][3350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '24.199.122.14' Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.941 [INFO][3350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.965 [INFO][3350] ipam/ipam.go 394: Looking up existing affinities for host host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.988 [INFO][3350] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.995 [INFO][3350] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.999 [INFO][3350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:04.999 [INFO][3350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.003 [INFO][3350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7 Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.014 [INFO][3350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.023 [INFO][3350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.68/26] block=192.168.17.64/26 handle="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.023 [INFO][3350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.68/26] handle="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" host="24.199.122.14" Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.023 [INFO][3350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:58:05.071554 containerd[1483]: 2025-08-12 23:58:05.023 [INFO][3350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.68/26] IPv6=[] ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" HandleID="k8s-pod-network.19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Workload="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.073276 containerd[1483]: 2025-08-12 23:58:05.025 [INFO][3337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"274c4ab1-8ab3-4daf-ab77-7c9576c0cf11", ResourceVersion:"1607", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:05.073276 containerd[1483]: 2025-08-12 23:58:05.025 [INFO][3337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.68/32] ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.073276 containerd[1483]: 2025-08-12 23:58:05.025 [INFO][3337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.073276 containerd[1483]: 2025-08-12 23:58:05.038 [INFO][3337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.073594 containerd[1483]: 2025-08-12 23:58:05.043 [INFO][3337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"274c4ab1-8ab3-4daf-ab77-7c9576c0cf11", ResourceVersion:"1607", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"26:4b:f2:50:7f:86", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:05.073594 containerd[1483]: 2025-08-12 23:58:05.067 [INFO][3337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="24.199.122.14-k8s-nfs--server--provisioner--0-eth0" Aug 12 23:58:05.098163 containerd[1483]: time="2025-08-12T23:58:05.096216622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:05.098163 containerd[1483]: time="2025-08-12T23:58:05.096282941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:05.098163 containerd[1483]: time="2025-08-12T23:58:05.096296996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:05.098163 containerd[1483]: time="2025-08-12T23:58:05.096396058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:05.128378 systemd[1]: Started cri-containerd-19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7.scope - libcontainer container 19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7. 
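In the WorkloadEndpoint dump above, the Go struct printer shows the endpoint's port numbers as hex literals (Port:0x801 and so on). Decoding them, as in this small sketch, recovers the NFS-related ports named in the same endpoint metadata ({nfs TCP 2049 0}, {statd TCP 662 0}, and so on):

# Hex port values copied from the WorkloadEndpoint dump above; the decimal values
# in the comments are simply the decoded equivalents.
ports = {
    "nfs": 0x801,        # 2049
    "nlockmgr": 0x8023,  # 32803
    "mountd": 0x4e50,    # 20048
    "rquotad": 0x36b,    # 875
    "rpcbind": 0x6f,     # 111
    "statd": 0x296,      # 662
}
for name, port in ports.items():
    print(f"{name:10s} {port}")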
Aug 12 23:58:05.180432 containerd[1483]: time="2025-08-12T23:58:05.180393183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:274c4ab1-8ab3-4daf-ab77-7c9576c0cf11,Namespace:default,Attempt:0,} returns sandbox id \"19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7\"" Aug 12 23:58:05.182763 containerd[1483]: time="2025-08-12T23:58:05.182712290Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Aug 12 23:58:05.261055 kubelet[1804]: E0812 23:58:05.260993 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:06.261955 kubelet[1804]: E0812 23:58:06.261363 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:06.289350 kubelet[1804]: I0812 23:58:06.289306 1804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:06.502586 systemd-networkd[1371]: cali60e51b789ff: Gained IPv6LL Aug 12 23:58:07.262536 kubelet[1804]: E0812 23:58:07.262478 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:07.387908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607808488.mount: Deactivated successfully. Aug 12 23:58:08.263592 kubelet[1804]: E0812 23:58:08.263552 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:09.265075 kubelet[1804]: E0812 23:58:09.264776 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:09.417751 containerd[1483]: time="2025-08-12T23:58:09.417691754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:09.418969 containerd[1483]: time="2025-08-12T23:58:09.418711106Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Aug 12 23:58:09.419598 containerd[1483]: time="2025-08-12T23:58:09.419564656Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:09.422402 containerd[1483]: time="2025-08-12T23:58:09.422373789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:09.423624 containerd[1483]: time="2025-08-12T23:58:09.423592173Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.24084071s" Aug 12 23:58:09.424066 containerd[1483]: time="2025-08-12T23:58:09.423719735Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Aug 12 23:58:09.427409 containerd[1483]: time="2025-08-12T23:58:09.427244734Z" 
level=info msg="CreateContainer within sandbox \"19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Aug 12 23:58:09.441811 containerd[1483]: time="2025-08-12T23:58:09.441713533Z" level=info msg="CreateContainer within sandbox \"19f20132f6a91dfc7358a0a259b3959cbb24059e4a42f09f70ada9f48b8a6fc7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"68985d23e6b9277bbbb0fc2a2690ff3dc98f8a35cc42e80c8588ebb0e3daf3ff\"" Aug 12 23:58:09.442745 containerd[1483]: time="2025-08-12T23:58:09.442678301Z" level=info msg="StartContainer for \"68985d23e6b9277bbbb0fc2a2690ff3dc98f8a35cc42e80c8588ebb0e3daf3ff\"" Aug 12 23:58:09.476310 systemd[1]: Started cri-containerd-68985d23e6b9277bbbb0fc2a2690ff3dc98f8a35cc42e80c8588ebb0e3daf3ff.scope - libcontainer container 68985d23e6b9277bbbb0fc2a2690ff3dc98f8a35cc42e80c8588ebb0e3daf3ff. Aug 12 23:58:09.506535 containerd[1483]: time="2025-08-12T23:58:09.506490540Z" level=info msg="StartContainer for \"68985d23e6b9277bbbb0fc2a2690ff3dc98f8a35cc42e80c8588ebb0e3daf3ff\" returns successfully" Aug 12 23:58:10.265666 kubelet[1804]: E0812 23:58:10.265604 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:11.266807 kubelet[1804]: E0812 23:58:11.266752 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:12.267641 kubelet[1804]: E0812 23:58:12.267560 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:12.399203 update_engine[1469]: I20250812 23:58:12.398746 1469 update_attempter.cc:509] Updating boot flags... Aug 12 23:58:12.445121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (3528) Aug 12 23:58:12.557112 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (3530) Aug 12 23:58:12.671682 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (3530) Aug 12 23:58:13.268652 kubelet[1804]: E0812 23:58:13.268577 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:14.269783 kubelet[1804]: E0812 23:58:14.269718 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:14.770122 kubelet[1804]: I0812 23:58:14.769599 1804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.526903405 podStartE2EDuration="10.769579342s" podCreationTimestamp="2025-08-12 23:58:04 +0000 UTC" firstStartedPulling="2025-08-12 23:58:05.182007544 +0000 UTC m=+31.048876234" lastFinishedPulling="2025-08-12 23:58:09.424683467 +0000 UTC m=+35.291552171" observedRunningTime="2025-08-12 23:58:09.617762266 +0000 UTC m=+35.484630977" watchObservedRunningTime="2025-08-12 23:58:14.769579342 +0000 UTC m=+40.636448083" Aug 12 23:58:14.778497 systemd[1]: Created slice kubepods-besteffort-poda0fb1215_2fec_44e5_9c6d_7eed61d19cae.slice - libcontainer container kubepods-besteffort-poda0fb1215_2fec_44e5_9c6d_7eed61d19cae.slice. 
Aug 12 23:58:14.827404 kubelet[1804]: I0812 23:58:14.827344 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bdc\" (UniqueName: \"kubernetes.io/projected/a0fb1215-2fec-44e5-9c6d-7eed61d19cae-kube-api-access-w6bdc\") pod \"test-pod-1\" (UID: \"a0fb1215-2fec-44e5-9c6d-7eed61d19cae\") " pod="default/test-pod-1" Aug 12 23:58:14.827404 kubelet[1804]: I0812 23:58:14.827404 1804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac201495-8558-46b2-b6be-d9c894c0c1f3\" (UniqueName: \"kubernetes.io/nfs/a0fb1215-2fec-44e5-9c6d-7eed61d19cae-pvc-ac201495-8558-46b2-b6be-d9c894c0c1f3\") pod \"test-pod-1\" (UID: \"a0fb1215-2fec-44e5-9c6d-7eed61d19cae\") " pod="default/test-pod-1" Aug 12 23:58:14.980451 kernel: FS-Cache: Loaded Aug 12 23:58:15.055428 kernel: RPC: Registered named UNIX socket transport module. Aug 12 23:58:15.055620 kernel: RPC: Registered udp transport module. Aug 12 23:58:15.055647 kernel: RPC: Registered tcp transport module. Aug 12 23:58:15.055664 kernel: RPC: Registered tcp-with-tls transport module. Aug 12 23:58:15.056340 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Aug 12 23:58:15.235261 kubelet[1804]: E0812 23:58:15.235178 1804 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:15.270816 kubelet[1804]: E0812 23:58:15.270746 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:15.375245 kernel: NFS: Registering the id_resolver key type Aug 12 23:58:15.377222 kernel: Key type id_resolver registered Aug 12 23:58:15.377415 kernel: Key type id_legacy registered Aug 12 23:58:15.434242 nfsidmap[3559]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-e-bc3605f087' Aug 12 23:58:15.441631 nfsidmap[3560]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-e-bc3605f087' Aug 12 23:58:15.683020 containerd[1483]: time="2025-08-12T23:58:15.682481766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a0fb1215-2fec-44e5-9c6d-7eed61d19cae,Namespace:default,Attempt:0,}" Aug 12 23:58:15.867290 systemd-networkd[1371]: cali5ec59c6bf6e: Link UP Aug 12 23:58:15.868819 systemd-networkd[1371]: cali5ec59c6bf6e: Gained carrier Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.760 [INFO][3566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {24.199.122.14-k8s-test--pod--1-eth0 default a0fb1215-2fec-44e5-9c6d-7eed61d19cae 1722 0 2025-08-12 23:58:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 24.199.122.14 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.760 [INFO][3566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 
23:58:15.800 [INFO][3577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" HandleID="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Workload="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.800 [INFO][3577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" HandleID="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Workload="24.199.122.14-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"default", "node":"24.199.122.14", "pod":"test-pod-1", "timestamp":"2025-08-12 23:58:15.800485661 +0000 UTC"}, Hostname:"24.199.122.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.800 [INFO][3577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.800 [INFO][3577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.800 [INFO][3577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '24.199.122.14' Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.814 [INFO][3577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.823 [INFO][3577] ipam/ipam.go 394: Looking up existing affinities for host host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.832 [INFO][3577] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.836 [INFO][3577] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.840 [INFO][3577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.840 [INFO][3577] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.843 [INFO][3577] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.850 [INFO][3577] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.859 [INFO][3577] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.69/26] block=192.168.17.64/26 handle="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.859 [INFO][3577] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.69/26] handle="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" host="24.199.122.14" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.859 [INFO][3577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.859 [INFO][3577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.69/26] IPv6=[] ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" HandleID="k8s-pod-network.a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Workload="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.889901 containerd[1483]: 2025-08-12 23:58:15.861 [INFO][3566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a0fb1215-2fec-44e5-9c6d-7eed61d19cae", ResourceVersion:"1722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:15.893501 containerd[1483]: 2025-08-12 23:58:15.862 [INFO][3566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.69/32] ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.893501 containerd[1483]: 2025-08-12 23:58:15.862 [INFO][3566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.893501 containerd[1483]: 2025-08-12 23:58:15.869 [INFO][3566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.893501 containerd[1483]: 2025-08-12 23:58:15.874 [INFO][3566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"24.199.122.14-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a0fb1215-2fec-44e5-9c6d-7eed61d19cae", ResourceVersion:"1722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"24.199.122.14", ContainerID:"a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"06:ad:24:02:81:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:15.893501 containerd[1483]: 2025-08-12 23:58:15.887 [INFO][3566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="24.199.122.14-k8s-test--pod--1-eth0" Aug 12 23:58:15.919614 containerd[1483]: time="2025-08-12T23:58:15.919395644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:15.919964 containerd[1483]: time="2025-08-12T23:58:15.919508312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:15.919964 containerd[1483]: time="2025-08-12T23:58:15.919650519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:15.920849 containerd[1483]: time="2025-08-12T23:58:15.920776302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:15.947542 systemd[1]: Started cri-containerd-a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd.scope - libcontainer container a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd. 
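Both workloads in this section receive addresses from the same host-affine Calico block: 192.168.17.68 for nfs-server-provisioner-0 and 192.168.17.69 for test-pod-1, out of 192.168.17.64/26 on node 24.199.122.14. A quick standard-library sketch confirms the block membership and size implied by those IPAM traces:

import ipaddress

# Block and addresses copied from the Calico IPAM traces above.
block = ipaddress.ip_network("192.168.17.64/26")
assigned = [ipaddress.ip_address("192.168.17.68"),   # nfs-server-provisioner-0
            ipaddress.ip_address("192.168.17.69")]   # test-pod-1

assert all(ip in block for ip in assigned)
print(block.num_addresses)        # 64 addresses in a /26 block
print(block.broadcast_address)    # 192.168.17.127, the top of the block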
Aug 12 23:58:16.021505 containerd[1483]: time="2025-08-12T23:58:16.021457596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a0fb1215-2fec-44e5-9c6d-7eed61d19cae,Namespace:default,Attempt:0,} returns sandbox id \"a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd\"" Aug 12 23:58:16.027296 containerd[1483]: time="2025-08-12T23:58:16.027246661Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 12 23:58:16.271942 kubelet[1804]: E0812 23:58:16.271746 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:16.378687 containerd[1483]: time="2025-08-12T23:58:16.378621814Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:16.379366 containerd[1483]: time="2025-08-12T23:58:16.379310243Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Aug 12 23:58:16.383105 containerd[1483]: time="2025-08-12T23:58:16.383028106Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 355.740925ms" Aug 12 23:58:16.383557 containerd[1483]: time="2025-08-12T23:58:16.383073400Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 12 23:58:16.387848 containerd[1483]: time="2025-08-12T23:58:16.387796992Z" level=info msg="CreateContainer within sandbox \"a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Aug 12 23:58:16.410274 containerd[1483]: time="2025-08-12T23:58:16.410140763Z" level=info msg="CreateContainer within sandbox \"a8596c38892fa8c3d0dafc0f478ca179e14ded1cf6acfd7a4df5fc6294c68cbd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f3f12976993de2ed588afe294f13a24d2710ad249ac8f9da47d7f2b1a129bdd5\"" Aug 12 23:58:16.411386 containerd[1483]: time="2025-08-12T23:58:16.411255780Z" level=info msg="StartContainer for \"f3f12976993de2ed588afe294f13a24d2710ad249ac8f9da47d7f2b1a129bdd5\"" Aug 12 23:58:16.462390 systemd[1]: Started cri-containerd-f3f12976993de2ed588afe294f13a24d2710ad249ac8f9da47d7f2b1a129bdd5.scope - libcontainer container f3f12976993de2ed588afe294f13a24d2710ad249ac8f9da47d7f2b1a129bdd5. 
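The two PullImage results in this part of the log express their durations in Go's duration format ("in 4.24084071s" for the nfs-provisioner image, "in 355.740925ms" for the already-cached nginx image). Below is a rough, hypothetical parser for that message shape, fed with shortened copies of those two lines; it is a sketch for these messages only, not a general containerd log parser:

import re

# Shortened copies of the two "Pulled image" messages above.
samples = [
    'msg="Pulled image \\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\" ... in 4.24084071s"',
    'msg="Pulled image \\"ghcr.io/flatcar/nginx:latest\\" ... in 355.740925ms"',
]

for raw in samples:
    line = raw.replace('\\"', '"')  # undo the escaping shown in the journal output
    m = re.search(r'Pulled image "([^"]+)".* in ([\d.]+)(ms|s)"', line)
    if m:
        image, value, unit = m.group(1), float(m.group(2)), m.group(3)
        seconds = value / 1000 if unit == "ms" else value
        print(f"{image}: {seconds:.3f}s")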
Aug 12 23:58:16.500691 containerd[1483]: time="2025-08-12T23:58:16.500538283Z" level=info msg="StartContainer for \"f3f12976993de2ed588afe294f13a24d2710ad249ac8f9da47d7f2b1a129bdd5\" returns successfully" Aug 12 23:58:17.272742 kubelet[1804]: E0812 23:58:17.272666 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:17.894422 systemd-networkd[1371]: cali5ec59c6bf6e: Gained IPv6LL Aug 12 23:58:18.273450 kubelet[1804]: E0812 23:58:18.273283 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:19.274074 kubelet[1804]: E0812 23:58:19.274010 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:20.275309 kubelet[1804]: E0812 23:58:20.275244 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 12 23:58:21.275982 kubelet[1804]: E0812 23:58:21.275930 1804 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
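Finally, the kubelet line that repeats roughly once per second throughout this section ("Unable to read config path ... /etc/kubernetes/manifests") is the static-pod file source noting that its configured manifest directory does not exist; as the message says, kubelet ignores it. If the noise is unwanted, creating the directory is the usual remedy, sketched below as an aside (an assumption about the remedy, not something the log itself does):

import pathlib

# Aside (assumption, not from the log): creating kubelet's configured static-pod
# directory stops the repeated "path does not exist, ignoring" message. Requires
# root; an empty directory simply means "no static pods".
pathlib.Path("/etc/kubernetes/manifests").mkdir(parents=True, exist_ok=True)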