Nov 8 00:26:19.919375 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:26:19.919401 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:26:19.919413 kernel: BIOS-provided physical RAM map: Nov 8 00:26:19.919420 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 8 00:26:19.919426 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 8 00:26:19.919432 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Nov 8 00:26:19.919440 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Nov 8 00:26:19.919447 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 8 00:26:19.919454 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 8 00:26:19.919463 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 8 00:26:19.919470 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 8 00:26:19.919477 kernel: NX (Execute Disable) protection: active Nov 8 00:26:19.919483 kernel: APIC: Static calls initialized Nov 8 00:26:19.919491 kernel: efi: EFI v2.7 by EDK II Nov 8 00:26:19.919499 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 8 00:26:19.919510 kernel: SMBIOS 2.7 present. 
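The BIOS-e820 entries above are the firmware's physical RAM map; only the ranges marked "usable" end up in the kernel's page allocator. A minimal sketch (Python, with the three usable ranges copied verbatim from this log) that totals them up; the result lands within a few hundred KiB of the 2037804K figure the kernel prints further down:

    import re

    # The "usable" ranges copied from the BIOS-e820 lines above.
    E820_LOG = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
    BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
    """

    RANGE_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    total = 0
    for start, end in RANGE_RE.findall(E820_LOG):
        total += int(end, 16) - int(start, 16) + 1   # ranges are inclusive

    print(f"usable RAM: {total} bytes (~{total / 2**20:.0f} MiB)")   # ~1990 MiB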
Nov 8 00:26:19.919517 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 8 00:26:19.919525 kernel: Hypervisor detected: KVM Nov 8 00:26:19.919533 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:26:19.919544 kernel: kvm-clock: using sched offset of 3699498594 cycles Nov 8 00:26:19.919554 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:26:19.919562 kernel: tsc: Detected 2499.998 MHz processor Nov 8 00:26:19.919570 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:26:19.919578 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:26:19.919586 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 8 00:26:19.919596 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 8 00:26:19.919604 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:26:19.919612 kernel: Using GB pages for direct mapping Nov 8 00:26:19.919619 kernel: Secure boot disabled Nov 8 00:26:19.919627 kernel: ACPI: Early table checksum verification disabled Nov 8 00:26:19.919634 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 8 00:26:19.919642 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 8 00:26:19.919650 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 8 00:26:19.919657 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 8 00:26:19.919667 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 8 00:26:19.919675 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 8 00:26:19.919683 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 8 00:26:19.921284 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 8 00:26:19.921295 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 8 00:26:19.921303 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 8 00:26:19.921320 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 8 00:26:19.921340 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 8 00:26:19.921349 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 8 00:26:19.921357 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 8 00:26:19.921366 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 8 00:26:19.921374 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 8 00:26:19.921383 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 8 00:26:19.921391 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 8 00:26:19.921402 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 8 00:26:19.921877 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 8 00:26:19.921890 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 8 00:26:19.921899 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 8 00:26:19.921907 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Nov 8 00:26:19.921915 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 8 
00:26:19.921924 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:26:19.921932 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 8 00:26:19.921940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 8 00:26:19.921954 kernel: NUMA: Initialized distance table, cnt=1 Nov 8 00:26:19.921962 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Nov 8 00:26:19.921971 kernel: Zone ranges: Nov 8 00:26:19.921979 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:26:19.921987 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 8 00:26:19.921996 kernel: Normal empty Nov 8 00:26:19.922004 kernel: Movable zone start for each node Nov 8 00:26:19.922012 kernel: Early memory node ranges Nov 8 00:26:19.922020 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 8 00:26:19.922031 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 8 00:26:19.922039 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 8 00:26:19.922047 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 8 00:26:19.922056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:26:19.922064 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 8 00:26:19.922073 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 8 00:26:19.922081 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 8 00:26:19.922090 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 8 00:26:19.922098 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:26:19.922106 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 8 00:26:19.922117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:26:19.922126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:26:19.922134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:26:19.922142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:26:19.922150 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:26:19.922159 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:26:19.922167 kernel: TSC deadline timer available Nov 8 00:26:19.922175 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:26:19.922183 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:26:19.922194 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 8 00:26:19.922202 kernel: Booting paravirtualized kernel on KVM Nov 8 00:26:19.922210 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:26:19.922219 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:26:19.922227 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:26:19.922236 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:26:19.922244 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:26:19.922252 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:26:19.922260 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:26:19.922272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 
nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:26:19.922281 kernel: random: crng init done Nov 8 00:26:19.922289 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:26:19.922297 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:26:19.922306 kernel: Fallback order for Node 0: 0 Nov 8 00:26:19.922314 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Nov 8 00:26:19.922322 kernel: Policy zone: DMA32 Nov 8 00:26:19.922330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:26:19.922342 kernel: Memory: 1874600K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 162944K reserved, 0K cma-reserved) Nov 8 00:26:19.922350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:26:19.922358 kernel: Kernel/User page tables isolation: enabled Nov 8 00:26:19.922366 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:26:19.922375 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:26:19.922383 kernel: Dynamic Preempt: voluntary Nov 8 00:26:19.922391 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:26:19.922404 kernel: rcu: RCU event tracing is enabled. Nov 8 00:26:19.922413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:26:19.922423 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:26:19.922432 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:26:19.922440 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:26:19.922448 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:26:19.922456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:26:19.922465 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:26:19.922473 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:26:19.922492 kernel: Console: colour dummy device 80x25 Nov 8 00:26:19.922501 kernel: printk: console [tty0] enabled Nov 8 00:26:19.922509 kernel: printk: console [ttyS0] enabled Nov 8 00:26:19.922518 kernel: ACPI: Core revision 20230628 Nov 8 00:26:19.922527 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 8 00:26:19.922538 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:26:19.922547 kernel: x2apic enabled Nov 8 00:26:19.922556 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:26:19.922565 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 8 00:26:19.922574 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Nov 8 00:26:19.922586 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:26:19.922594 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:26:19.922603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:26:19.922611 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:26:19.922620 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:26:19.922629 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
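The "Kernel command line:" entry above carries everything Flatcar needs to assemble /usr and the root filesystem (mount.usr, verity.usrhash, root=LABEL=ROOT, the EC2 OEM id, and so on). A minimal sketch of splitting such a line into key/value pairs so individual parameters can be looked up; it reads /proc/cmdline at runtime, does not handle quoted values, and repeated keys such as rootflags simply keep their last occurrence here:

    from pathlib import Path

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to None."""
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None   # repeated keys: last one wins
        return params

    params = parse_cmdline(Path("/proc/cmdline").read_text())
    print(params.get("verity.usrhash"))
    print(params.get("root"), params.get("mount.usr"))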
Nov 8 00:26:19.922638 kernel: RETBleed: Vulnerable Nov 8 00:26:19.922646 kernel: Speculative Store Bypass: Vulnerable Nov 8 00:26:19.922655 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:26:19.922663 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:26:19.922675 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:26:19.922683 kernel: active return thunk: its_return_thunk Nov 8 00:26:19.922704 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:26:19.922713 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:26:19.922722 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:26:19.922730 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:26:19.922739 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 8 00:26:19.922747 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 8 00:26:19.922756 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 8 00:26:19.922765 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 8 00:26:19.922773 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 8 00:26:19.922785 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 8 00:26:19.922793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:26:19.922802 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 8 00:26:19.922811 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 8 00:26:19.922820 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 8 00:26:19.922828 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 8 00:26:19.922837 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 8 00:26:19.922845 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 8 00:26:19.922854 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Nov 8 00:26:19.922863 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:26:19.922872 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:26:19.922883 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:26:19.922892 kernel: landlock: Up and running. Nov 8 00:26:19.922900 kernel: SELinux: Initializing. Nov 8 00:26:19.922909 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:26:19.922918 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:26:19.922927 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 8 00:26:19.922936 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:26:19.922945 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:26:19.922954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:26:19.922963 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 8 00:26:19.922974 kernel: signal: max sigframe size: 3632 Nov 8 00:26:19.922983 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:26:19.922992 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:26:19.923001 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:26:19.923010 kernel: smp: Bringing up secondary CPUs ... 
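The x86/fpu lines above list the offset and size of every enabled XSAVE component, and the reported "context size is 2568 bytes" is simply the 512-byte legacy FXSAVE area plus the 64-byte XSAVE header plus the listed per-component sizes (the 0x001 and 0x002 features live inside the legacy area). A quick check with the numbers copied from those lines:

    # XSAVE component sizes copied from the x86/fpu lines above.
    component_sizes = {2: 256, 3: 64, 4: 64, 5: 64, 6: 512, 7: 1024, 9: 8}

    legacy_area = 512     # classic FXSAVE region (covers x87 and SSE state)
    xstate_header = 64    # XSAVE header

    total = legacy_area + xstate_header + sum(component_sizes.values())
    print(total)          # 2568, matching "context size is 2568 bytes" above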
Nov 8 00:26:19.923019 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:26:19.923027 kernel: .... node #0, CPUs: #1 Nov 8 00:26:19.923037 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 8 00:26:19.923046 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 8 00:26:19.923057 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:26:19.923066 kernel: smpboot: Max logical packages: 1 Nov 8 00:26:19.923075 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Nov 8 00:26:19.923084 kernel: devtmpfs: initialized Nov 8 00:26:19.923093 kernel: x86/mm: Memory block size: 128MB Nov 8 00:26:19.923102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 8 00:26:19.923110 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:26:19.923119 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:26:19.923128 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:26:19.923139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:26:19.923148 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:26:19.923157 kernel: audit: type=2000 audit(1762561579.446:1): state=initialized audit_enabled=0 res=1 Nov 8 00:26:19.923165 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:26:19.923174 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:26:19.923183 kernel: cpuidle: using governor menu Nov 8 00:26:19.923192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:26:19.923201 kernel: dca service started, version 1.12.1 Nov 8 00:26:19.923209 kernel: PCI: Using configuration type 1 for base access Nov 8 00:26:19.923221 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
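The MDS and MMIO Stale Data warnings above (like the earlier Spectre, RETBleed and Speculative Store Bypass lines) are also exported after boot under /sys/devices/system/cpu/vulnerabilities/, one file per issue. A small sketch that dumps the same status from a running system (Linux-only sysfs interface):

    from pathlib import Path

    # Each file holds the kernel's one-line mitigation status for that CPU issue.
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:25s} {entry.read_text().strip()}")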
Nov 8 00:26:19.923230 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:26:19.923238 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:26:19.923247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:26:19.923256 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:26:19.923265 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:26:19.923274 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:26:19.923282 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:26:19.923291 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 8 00:26:19.923303 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:26:19.923311 kernel: ACPI: Interpreter enabled Nov 8 00:26:19.923320 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:26:19.923329 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:26:19.923338 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:26:19.923346 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:26:19.923355 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 8 00:26:19.923364 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:26:19.923535 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:26:19.923639 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 8 00:26:19.926074 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 8 00:26:19.926098 kernel: acpiphp: Slot [3] registered Nov 8 00:26:19.926109 kernel: acpiphp: Slot [4] registered Nov 8 00:26:19.926118 kernel: acpiphp: Slot [5] registered Nov 8 00:26:19.926127 kernel: acpiphp: Slot [6] registered Nov 8 00:26:19.926136 kernel: acpiphp: Slot [7] registered Nov 8 00:26:19.926150 kernel: acpiphp: Slot [8] registered Nov 8 00:26:19.926159 kernel: acpiphp: Slot [9] registered Nov 8 00:26:19.926168 kernel: acpiphp: Slot [10] registered Nov 8 00:26:19.926177 kernel: acpiphp: Slot [11] registered Nov 8 00:26:19.926186 kernel: acpiphp: Slot [12] registered Nov 8 00:26:19.926195 kernel: acpiphp: Slot [13] registered Nov 8 00:26:19.926204 kernel: acpiphp: Slot [14] registered Nov 8 00:26:19.926212 kernel: acpiphp: Slot [15] registered Nov 8 00:26:19.926221 kernel: acpiphp: Slot [16] registered Nov 8 00:26:19.926230 kernel: acpiphp: Slot [17] registered Nov 8 00:26:19.926241 kernel: acpiphp: Slot [18] registered Nov 8 00:26:19.926250 kernel: acpiphp: Slot [19] registered Nov 8 00:26:19.926259 kernel: acpiphp: Slot [20] registered Nov 8 00:26:19.926267 kernel: acpiphp: Slot [21] registered Nov 8 00:26:19.926276 kernel: acpiphp: Slot [22] registered Nov 8 00:26:19.926285 kernel: acpiphp: Slot [23] registered Nov 8 00:26:19.926294 kernel: acpiphp: Slot [24] registered Nov 8 00:26:19.926302 kernel: acpiphp: Slot [25] registered Nov 8 00:26:19.926311 kernel: acpiphp: Slot [26] registered Nov 8 00:26:19.926322 kernel: acpiphp: Slot [27] registered Nov 8 00:26:19.926331 kernel: acpiphp: Slot [28] registered Nov 8 00:26:19.926340 kernel: acpiphp: Slot [29] registered Nov 8 00:26:19.926348 kernel: acpiphp: Slot [30] registered Nov 8 00:26:19.926357 kernel: acpiphp: Slot [31] registered Nov 8 00:26:19.926366 kernel: PCI host bridge to bus 0000:00 Nov 8 00:26:19.926470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:26:19.926557 
kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:26:19.926644 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:26:19.926747 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 8 00:26:19.926828 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 8 00:26:19.926910 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:26:19.927016 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 8 00:26:19.927117 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 8 00:26:19.927217 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Nov 8 00:26:19.927315 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 8 00:26:19.927408 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 8 00:26:19.927500 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 8 00:26:19.927592 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 8 00:26:19.927683 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 8 00:26:19.930280 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 8 00:26:19.930381 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 8 00:26:19.930492 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Nov 8 00:26:19.930586 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Nov 8 00:26:19.930678 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 8 00:26:19.930847 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Nov 8 00:26:19.930937 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:26:19.931040 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Nov 8 00:26:19.931137 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Nov 8 00:26:19.931237 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Nov 8 00:26:19.931328 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Nov 8 00:26:19.931339 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:26:19.931349 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:26:19.931358 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:26:19.931368 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:26:19.931377 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 8 00:26:19.931390 kernel: iommu: Default domain type: Translated Nov 8 00:26:19.931399 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:26:19.931408 kernel: efivars: Registered efivars operations Nov 8 00:26:19.931416 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:26:19.931425 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:26:19.931434 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 8 00:26:19.931443 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 8 00:26:19.931531 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 8 00:26:19.931621 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 8 00:26:19.932842 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:26:19.932867 kernel: vgaarb: loaded Nov 8 00:26:19.932877 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 8 00:26:19.932887 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 8 00:26:19.932897 kernel: clocksource: Switched to 
clocksource kvm-clock Nov 8 00:26:19.932905 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:26:19.932915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:26:19.932924 kernel: pnp: PnP ACPI init Nov 8 00:26:19.932933 kernel: pnp: PnP ACPI: found 5 devices Nov 8 00:26:19.932948 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:26:19.932957 kernel: NET: Registered PF_INET protocol family Nov 8 00:26:19.932966 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:26:19.932975 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:26:19.932985 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:26:19.932994 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:26:19.933003 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:26:19.933012 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:26:19.933023 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:19.933032 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:19.933041 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:26:19.933050 kernel: NET: Registered PF_XDP protocol family Nov 8 00:26:19.933149 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:26:19.933234 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:26:19.933317 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:26:19.933413 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 8 00:26:19.933495 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 8 00:26:19.934901 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:26:19.934925 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:26:19.934935 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:26:19.934946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 8 00:26:19.934955 kernel: clocksource: Switched to clocksource tsc Nov 8 00:26:19.934964 kernel: Initialise system trusted keyrings Nov 8 00:26:19.934973 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:26:19.934982 kernel: Key type asymmetric registered Nov 8 00:26:19.934996 kernel: Asymmetric key parser 'x509' registered Nov 8 00:26:19.935005 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:26:19.935014 kernel: io scheduler mq-deadline registered Nov 8 00:26:19.935023 kernel: io scheduler kyber registered Nov 8 00:26:19.935032 kernel: io scheduler bfq registered Nov 8 00:26:19.935041 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:26:19.935050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:26:19.935059 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:26:19.935068 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:26:19.935080 kernel: i8042: Warning: Keylock active Nov 8 00:26:19.935089 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:26:19.935098 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:26:19.935204 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 8 00:26:19.935294 kernel: 
rtc_cmos 00:00: registered as rtc0 Nov 8 00:26:19.935380 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:26:19 UTC (1762561579) Nov 8 00:26:19.935465 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 8 00:26:19.935476 kernel: intel_pstate: CPU model not supported Nov 8 00:26:19.935489 kernel: efifb: probing for efifb Nov 8 00:26:19.935498 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Nov 8 00:26:19.935507 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 8 00:26:19.935516 kernel: efifb: scrolling: redraw Nov 8 00:26:19.935525 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:26:19.935534 kernel: Console: switching to colour frame buffer device 100x37 Nov 8 00:26:19.935543 kernel: fb0: EFI VGA frame buffer device Nov 8 00:26:19.935552 kernel: pstore: Using crash dump compression: deflate Nov 8 00:26:19.935561 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:26:19.935572 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:26:19.935581 kernel: Segment Routing with IPv6 Nov 8 00:26:19.935590 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:26:19.935599 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:26:19.935609 kernel: Key type dns_resolver registered Nov 8 00:26:19.935617 kernel: IPI shorthand broadcast: enabled Nov 8 00:26:19.935647 kernel: sched_clock: Marking stable (489002260, 128904915)->(682103669, -64196494) Nov 8 00:26:19.935658 kernel: registered taskstats version 1 Nov 8 00:26:19.935668 kernel: Loading compiled-in X.509 certificates Nov 8 00:26:19.935679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:26:19.937728 kernel: Key type .fscrypt registered Nov 8 00:26:19.937745 kernel: Key type fscrypt-provisioning registered Nov 8 00:26:19.937754 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:26:19.937765 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:26:19.937775 kernel: ima: No architecture policies found Nov 8 00:26:19.937784 kernel: clk: Disabling unused clocks Nov 8 00:26:19.937794 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:26:19.937803 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:26:19.937817 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:26:19.937827 kernel: Run /init as init process Nov 8 00:26:19.937836 kernel: with arguments: Nov 8 00:26:19.937846 kernel: /init Nov 8 00:26:19.937855 kernel: with environment: Nov 8 00:26:19.937864 kernel: HOME=/ Nov 8 00:26:19.937873 kernel: TERM=linux Nov 8 00:26:19.937886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:26:19.937901 systemd[1]: Detected virtualization amazon. Nov 8 00:26:19.937912 systemd[1]: Detected architecture x86-64. Nov 8 00:26:19.937924 systemd[1]: Running in initrd. Nov 8 00:26:19.937933 systemd[1]: No hostname configured, using default hostname. Nov 8 00:26:19.937943 systemd[1]: Hostname set to . Nov 8 00:26:19.937953 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:26:19.937963 systemd[1]: Queued start job for default target initrd.target. 
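The rtc_cmos line above sets the system clock to 2025-11-08T00:26:19 UTC and prints the matching epoch value 1762561579, the same number that appears in the audit timestamp earlier in this log. A one-line sanity check of that conversion:

    from datetime import datetime, timezone

    # 1762561579 is the epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1762561579, tz=timezone.utc).isoformat())
    # -> 2025-11-08T00:26:19+00:00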
Nov 8 00:26:19.937973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:19.937986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:19.937997 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:26:19.938007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:26:19.938017 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:26:19.938029 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:26:19.938043 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:26:19.938054 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:26:19.938064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:19.938073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:19.938083 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:26:19.938093 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:26:19.938103 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:26:19.938116 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:26:19.938126 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:19.938136 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:19.938146 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:26:19.938156 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:26:19.938166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:19.938175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:19.938185 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:19.938195 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:26:19.938208 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:26:19.938218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:26:19.938228 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:26:19.938238 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:26:19.938248 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:26:19.938257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:26:19.938267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:19.938308 systemd-journald[178]: Collecting audit messages is disabled. Nov 8 00:26:19.938334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:19.938344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:19.938354 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:26:19.938369 systemd-journald[178]: Journal started Nov 8 00:26:19.938390 systemd-journald[178]: Runtime Journal (/run/log/journal/ec227bc69c55ae191f2a8f24c5fada91) is 4.7M, max 38.2M, 33.4M free. 
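The "Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM" pairs above show systemd's unit-name escaping: "/" is written as "-" and a literal "-" as "\x2d". The real tool for reversing this is systemd-escape --unescape --path; the helper below is only an illustrative approximation of the same mapping:

    import re

    def unescape_device_unit(unit: str) -> str:
        """Roughly invert systemd path escaping for a .device unit name."""
        name = unit.removesuffix(".device")
        name = name.replace("-", "/")                     # '-' encodes '/'
        name = re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_device_unit(r"dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device"))
    # -> /dev/disk/by-label/EFI-SYSTEM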
Nov 8 00:26:19.942710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:26:19.943319 systemd-modules-load[179]: Inserted module 'overlay' Nov 8 00:26:19.946804 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:26:19.950735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:19.953408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:19.961085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:19.964876 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:26:19.968018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:26:19.976074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:26:19.979057 systemd-modules-load[179]: Inserted module 'br_netfilter' Nov 8 00:26:19.980078 kernel: Bridge firewalling registered Nov 8 00:26:19.982043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:19.994860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:26:19.996474 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:19.998014 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:19.998584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:20.005705 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 8 00:26:20.007070 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:26:20.008229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:20.015963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:26:20.025031 dracut-cmdline[210]: dracut-dracut-053 Nov 8 00:26:20.028718 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:26:20.065311 systemd-resolved[213]: Positive Trust Anchors: Nov 8 00:26:20.065326 systemd-resolved[213]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:26:20.065404 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:26:20.075293 systemd-resolved[213]: Defaulting to hostname 'linux'. Nov 8 00:26:20.076747 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:26:20.078237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:20.121723 kernel: SCSI subsystem initialized Nov 8 00:26:20.131716 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:26:20.142716 kernel: iscsi: registered transport (tcp) Nov 8 00:26:20.165001 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:26:20.165077 kernel: QLogic iSCSI HBA Driver Nov 8 00:26:20.205721 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:20.210918 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:26:20.238332 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:26:20.238404 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:26:20.238427 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:26:20.281739 kernel: raid6: avx512x4 gen() 17642 MB/s Nov 8 00:26:20.299747 kernel: raid6: avx512x2 gen() 17871 MB/s Nov 8 00:26:20.317755 kernel: raid6: avx512x1 gen() 16727 MB/s Nov 8 00:26:20.335750 kernel: raid6: avx2x4 gen() 17783 MB/s Nov 8 00:26:20.353764 kernel: raid6: avx2x2 gen() 17189 MB/s Nov 8 00:26:20.372063 kernel: raid6: avx2x1 gen() 6435 MB/s Nov 8 00:26:20.372137 kernel: raid6: using algorithm avx512x2 gen() 17871 MB/s Nov 8 00:26:20.390958 kernel: raid6: .... xor() 15836 MB/s, rmw enabled Nov 8 00:26:20.391037 kernel: raid6: using avx512x2 recovery algorithm Nov 8 00:26:20.413725 kernel: xor: automatically using best checksumming function avx Nov 8 00:26:20.580722 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:26:20.591226 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:26:20.597900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:20.612709 systemd-udevd[396]: Using default interface naming scheme 'v255'. Nov 8 00:26:20.617867 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:20.625902 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:26:20.644775 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Nov 8 00:26:20.675383 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:20.680909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:26:20.732820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:20.742559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
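The raid6 lines above benchmark each available gen() implementation, and the kernel then announces "using algorithm avx512x2 gen() 17871 MB/s", which is simply the fastest of the measured candidates. A trivial restatement of that pick, with the throughputs copied from the log:

    # gen() throughputs in MB/s, copied from the raid6 benchmark lines above.
    results = {
        "avx512x4": 17642, "avx512x2": 17871, "avx512x1": 16727,
        "avx2x4": 17783, "avx2x2": 17189, "avx2x1": 6435,
    }
    best = max(results, key=results.get)
    print(best, results[best])   # avx512x2 17871, the kernel's choice above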
Nov 8 00:26:20.773565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:20.776349 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:20.779190 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:20.779767 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:26:20.789118 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:26:20.806326 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:20.849047 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 8 00:26:20.849315 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 8 00:26:20.855645 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 8 00:26:20.855967 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:26:20.854883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:20.855041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:20.857161 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:20.857782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:20.857973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:20.858582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:20.868034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:20.879042 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:26:20.879114 kernel: AES CTR mode by8 optimization enabled Nov 8 00:26:20.890712 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:f9:e5:36:de:01 Nov 8 00:26:20.899516 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:26:20.910667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:20.911582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:20.920305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:20.922809 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 8 00:26:20.927842 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 8 00:26:20.943716 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 8 00:26:20.945458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:20.950904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:20.967066 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:26:20.967090 kernel: GPT:9289727 != 33554431 Nov 8 00:26:20.967102 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:26:20.967114 kernel: GPT:9289727 != 33554431 Nov 8 00:26:20.967125 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:26:20.967136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:26:20.999210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:21.022726 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (449) Nov 8 00:26:21.042127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
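The "GPT:Primary header thinks Alt. header is not at the end of the disk ... 9289727 != 33554431" messages above mean the backup-header LBA recorded in the primary GPT header points short of the device's real last sector, which is typical when a small disk image is written onto a larger EBS volume; the kernel suggests parted, and the disk-uuid messages further down show the headers being rewritten on first boot. A rough sketch of reading those two numbers directly (assumes a 512-byte-sector device such as the /dev/nvme0n1 from this log, and needs root):

    import os, struct

    DEV = "/dev/nvme0n1"   # device from the log above; adjust as needed
    SECTOR = 512

    with open(DEV, "rb") as f:
        last_lba = f.seek(0, os.SEEK_END) // SECTOR - 1   # real last LBA of the device
        f.seek(1 * SECTOR)                                # primary GPT header lives at LBA 1
        hdr = f.read(92)

    signature = hdr[0:8]                                  # should be b"EFI PART"
    current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)

    print(signature, current_lba, backup_lba, last_lba)
    # A mismatch like "9289727 != 33554431" means backup_lba != last_lba.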
Nov 8 00:26:21.047713 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (445) Nov 8 00:26:21.056783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 8 00:26:21.058162 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 8 00:26:21.065882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:26:21.080404 disk-uuid[618]: Primary Header is updated. Nov 8 00:26:21.080404 disk-uuid[618]: Secondary Entries is updated. Nov 8 00:26:21.080404 disk-uuid[618]: Secondary Header is updated. Nov 8 00:26:21.094233 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 8 00:26:21.111685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:26:22.096469 disk-uuid[623]: The operation has completed successfully. Nov 8 00:26:22.097677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:26:22.245319 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:26:22.245455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:26:22.268900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:26:22.272738 sh[889]: Success Nov 8 00:26:22.295934 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:26:22.428951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:26:22.445831 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:26:22.449709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:26:22.497768 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:26:22.497846 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:22.499950 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:26:22.501765 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:26:22.504294 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:26:22.530741 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:26:22.534483 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:26:22.536366 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:26:22.547988 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:26:22.552939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:26:22.574760 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:22.578899 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:22.578977 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:26:22.596724 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:26:22.609105 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:26:22.612108 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:22.619293 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 8 00:26:22.631878 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:26:22.660181 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:22.666960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:26:22.691731 systemd-networkd[1081]: lo: Link UP Nov 8 00:26:22.691742 systemd-networkd[1081]: lo: Gained carrier Nov 8 00:26:22.693597 systemd-networkd[1081]: Enumeration completed Nov 8 00:26:22.693832 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:26:22.694572 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:26:22.694577 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:26:22.694710 systemd[1]: Reached target network.target - Network. Nov 8 00:26:22.698561 systemd-networkd[1081]: eth0: Link UP Nov 8 00:26:22.698567 systemd-networkd[1081]: eth0: Gained carrier Nov 8 00:26:22.698580 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:26:22.715007 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.23.96/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:26:22.896884 ignition[1034]: Ignition 2.19.0 Nov 8 00:26:22.896898 ignition[1034]: Stage: fetch-offline Nov 8 00:26:22.897178 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:22.897192 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:22.897824 ignition[1034]: Ignition finished successfully Nov 8 00:26:22.900066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:26:22.906887 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
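systemd-networkd above brings eth0 up with the DHCPv4 lease 172.31.23.96/20 and gateway 172.31.16.1. A quick standard-library check that the gateway really sits inside the leased subnet:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.96/20")     # address from the lease above
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)              # 172.31.16.0/20
    print(gateway in iface.network)   # True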
Nov 8 00:26:22.921758 ignition[1090]: Ignition 2.19.0 Nov 8 00:26:22.921773 ignition[1090]: Stage: fetch Nov 8 00:26:22.922246 ignition[1090]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:22.922261 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:22.922389 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:22.939345 ignition[1090]: PUT result: OK Nov 8 00:26:22.942488 ignition[1090]: parsed url from cmdline: "" Nov 8 00:26:22.942502 ignition[1090]: no config URL provided Nov 8 00:26:22.942516 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:26:22.942535 ignition[1090]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:26:22.942569 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:22.944365 ignition[1090]: PUT result: OK Nov 8 00:26:22.944439 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 8 00:26:22.946102 ignition[1090]: GET result: OK Nov 8 00:26:22.946167 ignition[1090]: parsing config with SHA512: 1cb6bb3e9bd014ee25b698b9b195629dd46102d5f4305d650ad4a6503985953eba323d1e738d2b8bedd724f2ba77cd47e6657230842b37ae7cf921b9586c5463 Nov 8 00:26:22.950122 unknown[1090]: fetched base config from "system" Nov 8 00:26:22.950137 unknown[1090]: fetched base config from "system" Nov 8 00:26:22.950566 ignition[1090]: fetch: fetch complete Nov 8 00:26:22.950146 unknown[1090]: fetched user config from "aws" Nov 8 00:26:22.950573 ignition[1090]: fetch: fetch passed Nov 8 00:26:22.953243 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:26:22.950633 ignition[1090]: Ignition finished successfully Nov 8 00:26:22.958911 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:26:22.977320 ignition[1096]: Ignition 2.19.0 Nov 8 00:26:22.977491 ignition[1096]: Stage: kargs Nov 8 00:26:22.978088 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:22.978102 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:22.978229 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:22.979254 ignition[1096]: PUT result: OK Nov 8 00:26:22.981983 ignition[1096]: kargs: kargs passed Nov 8 00:26:22.982062 ignition[1096]: Ignition finished successfully Nov 8 00:26:22.984268 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:26:22.989928 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:26:23.006729 ignition[1102]: Ignition 2.19.0 Nov 8 00:26:23.006743 ignition[1102]: Stage: disks Nov 8 00:26:23.007275 ignition[1102]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:23.007290 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:23.007420 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:23.008295 ignition[1102]: PUT result: OK Nov 8 00:26:23.010788 ignition[1102]: disks: disks passed Nov 8 00:26:23.010866 ignition[1102]: Ignition finished successfully Nov 8 00:26:23.012197 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:26:23.013189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:23.013797 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:26:23.014333 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:26:23.014901 systemd[1]: Reached target sysinit.target - System Initialization. 
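The Ignition "PUT http://169.254.169.254/latest/api/token" followed by "GET .../user-data" lines above are the IMDSv2 session flow: first obtain a short-lived token with a PUT, then present it as a header on the metadata and user-data GETs. A minimal sketch of the same two requests with the standard library (this only works from inside an EC2 instance, the /2019-10-01/ path is simply the one this log shows, and a 404 on user-data just means none was configured):

    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a short-lived session token (IMDSv2).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(req, timeout=5).read().decode()

    # Step 2: GET the user data, presenting the token.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(req, timeout=5).read()

    # Ignition logs a SHA512 of the config it parsed; this prints the same kind of digest.
    print(hashlib.sha512(user_data).hexdigest())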
Nov 8 00:26:23.015467 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:26:23.020924 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:26:23.060066 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:26:23.063981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:26:23.069825 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:26:23.178923 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:26:23.179793 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:26:23.180824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:26:23.195839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:23.198370 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:26:23.199642 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:26:23.199726 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:26:23.199754 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:23.207444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:26:23.214909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:26:23.217568 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1129) Nov 8 00:26:23.217604 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:23.222714 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:23.222783 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:26:23.234741 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:26:23.236226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:26:23.416531 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:26:23.423229 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:26:23.428919 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:26:23.433552 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:26:23.604233 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:23.610828 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:26:23.615978 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:26:23.622915 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
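systemd-fsck above reports "ROOT: clean, 14/553520 files, 52654/553472 blocks", i.e. the inode and block usage of the still mostly empty ROOT filesystem. A small sketch that turns those counters into percentages, using the exact summary line from this log:

    import re

    line = "ROOT: clean, 14/553520 files, 52654/553472 blocks"
    used_inodes, total_inodes, used_blocks, total_blocks = map(
        int, re.match(r".*?(\d+)/(\d+) files, (\d+)/(\d+) blocks", line).groups())

    print(f"inodes: {100 * used_inodes / total_inodes:.2f}% used")   # ~0.00%
    print(f"blocks: {100 * used_blocks / total_blocks:.2f}% used")   # ~9.51%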
Nov 8 00:26:23.624115 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:23.656285 ignition[1242]: INFO : Ignition 2.19.0 Nov 8 00:26:23.657009 ignition[1242]: INFO : Stage: mount Nov 8 00:26:23.657541 ignition[1242]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:23.657541 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:23.658428 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:23.659876 ignition[1242]: INFO : PUT result: OK Nov 8 00:26:23.661582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:26:23.663517 ignition[1242]: INFO : mount: mount passed Nov 8 00:26:23.663517 ignition[1242]: INFO : Ignition finished successfully Nov 8 00:26:23.664791 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:26:23.671850 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:26:23.685963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:23.704726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1255) Nov 8 00:26:23.709293 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:23.709474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:23.709499 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:26:23.717733 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:26:23.718463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:26:23.747100 ignition[1271]: INFO : Ignition 2.19.0 Nov 8 00:26:23.747100 ignition[1271]: INFO : Stage: files Nov 8 00:26:23.748561 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:23.748561 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:23.748561 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:23.749980 ignition[1271]: INFO : PUT result: OK Nov 8 00:26:23.752283 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:26:23.754112 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:26:23.754112 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:26:23.772307 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:26:23.773629 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:26:23.773629 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:26:23.772975 unknown[1271]: wrote ssh authorized keys file for user: core Nov 8 00:26:23.776608 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:26:23.777292 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:26:24.216909 systemd-networkd[1081]: eth0: Gained IPv6LL Nov 8 00:26:24.218230 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Nov 8 00:26:24.789439 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:26:24.790774 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:24.790774 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:24.790774 ignition[1271]: INFO : files: files passed Nov 8 00:26:24.790774 ignition[1271]: INFO : Ignition finished successfully Nov 8 00:26:24.792351 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:26:24.799917 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:26:24.802918 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:26:24.807624 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:26:24.808869 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:26:24.825796 initrd-setup-root-after-ignition[1300]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:24.827614 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:24.828813 initrd-setup-root-after-ignition[1300]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:24.828519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:24.829942 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:26:24.835983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:26:24.872347 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:26:24.872507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:26:24.873941 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:26:24.875002 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:26:24.875826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:26:24.878871 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:26:24.901015 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
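The files stage above writes install.sh and update.conf, creates the /etc/extensions/kubernetes.raw symlink, and downloads the kubernetes sysext image from extensions.flatcar.org. Below is a hedged sketch of an Ignition v3 style config that would request those operations; the field names follow the spec-3 schema as commonly documented and should be treated as illustrative rather than authoritative:

    # Hedged sketch: an Ignition-style config describing the file, link, and
    # download operations seen in the log. Schema details are illustrative.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755},
                {"path": "/etc/flatcar/update.conf"},
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
                    "contents": {
                        "source": "https://extensions.flatcar.org/extensions/"
                                  "kubernetes-v1.34.1-x86-64.raw"
                    },
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
                }
            ],
        },
    }

    print(json.dumps(config, indent=2))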
Nov 8 00:26:24.905922 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:26:24.920242 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:24.920980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:24.922135 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:26:24.923058 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:26:24.923243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:26:24.924509 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:26:24.925522 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:26:24.926400 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:26:24.927223 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:24.928036 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:24.928864 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:26:24.929775 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:24.930576 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:26:24.931756 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:26:24.932503 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:26:24.933232 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:26:24.933476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:24.934588 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:24.935399 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:24.936087 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:26:24.936234 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:24.936918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:26:24.937092 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:24.938541 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:26:24.938753 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:24.939455 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:26:24.939611 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:26:24.947518 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:26:24.948011 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:26:24.948192 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:24.951962 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:26:24.952356 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:26:24.952528 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:24.953658 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:26:24.953834 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:24.960950 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 8 00:26:24.961480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:26:24.970911 ignition[1324]: INFO : Ignition 2.19.0 Nov 8 00:26:24.970911 ignition[1324]: INFO : Stage: umount Nov 8 00:26:24.972520 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:24.972520 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:26:24.972520 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:26:24.974513 ignition[1324]: INFO : PUT result: OK Nov 8 00:26:24.977982 ignition[1324]: INFO : umount: umount passed Nov 8 00:26:24.977982 ignition[1324]: INFO : Ignition finished successfully Nov 8 00:26:24.979489 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:26:24.979640 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:26:24.981181 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:26:24.981305 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:26:24.983264 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:26:24.983336 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:26:24.983903 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:26:24.983956 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:26:24.984454 systemd[1]: Stopped target network.target - Network. Nov 8 00:26:24.986753 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:26:24.986826 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:26:24.987526 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:26:24.987968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:26:24.991778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:24.992773 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:26:24.993180 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:26:24.993938 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:26:24.994019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:24.994530 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:26:24.994586 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:24.995709 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:26:24.995782 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:26:24.996208 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:26:24.996268 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:24.999315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:26:24.999962 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:26:25.002213 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:26:25.002800 systemd-networkd[1081]: eth0: DHCPv6 lease lost Nov 8 00:26:25.003323 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:26:25.003456 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:26:25.005036 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:26:25.005166 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 8 00:26:25.007289 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:26:25.007363 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:25.008217 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:26:25.008282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:25.015864 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:26:25.016289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:26:25.016353 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:25.016878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:25.017641 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:26:25.019829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:26:25.030907 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:26:25.031042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:25.033494 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:26:25.033565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:25.034950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:26:25.035019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:25.037892 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:26:25.038599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:26:25.039588 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:26:25.040160 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:25.041994 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:26:25.042076 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:25.042885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:26:25.042936 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:25.043549 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:26:25.043612 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:26:25.044670 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:26:25.044754 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:25.045875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:25.045937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:25.055980 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:26:25.056646 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:26:25.056762 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:25.057683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:25.059813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:25.064659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:26:25.064847 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Nov 8 00:26:25.066049 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:26:25.070893 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:26:25.080760 systemd[1]: Switching root. Nov 8 00:26:25.103304 systemd-journald[178]: Journal stopped Nov 8 00:26:26.280816 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Nov 8 00:26:26.280932 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:26:26.280960 kernel: SELinux: policy capability open_perms=1 Nov 8 00:26:26.280978 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:26:26.281001 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:26:26.281019 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:26:26.281040 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:26:26.281060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:26:26.281078 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:26:26.281099 kernel: audit: type=1403 audit(1762561585.315:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:26:26.281118 systemd[1]: Successfully loaded SELinux policy in 43.136ms. Nov 8 00:26:26.281140 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.985ms. Nov 8 00:26:26.281165 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:26:26.281185 systemd[1]: Detected virtualization amazon. Nov 8 00:26:26.281205 systemd[1]: Detected architecture x86-64. Nov 8 00:26:26.281232 systemd[1]: Detected first boot. Nov 8 00:26:26.281249 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:26:26.281268 zram_generator::config[1367]: No configuration found. Nov 8 00:26:26.281291 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:26:26.281312 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:26:26.281340 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:26:26.281362 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:26:26.281384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:26:26.281410 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:26:26.281430 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:26:26.281452 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:26:26.281473 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:26:26.281496 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:26:26.281518 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:26:26.281538 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:26:26.281559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:26.281584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:26.281605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
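The systemd banner above lists build features as +FLAG/-FLAG tokens. A trivial parse splits them into enabled and disabled sets (the trailing default-hierarchy=unified setting is left out of the string here):

    # Split the systemd feature string from the log into enabled/disabled flags.
    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")

    enabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("+"))
    disabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("-"))

    print(f"enabled ({len(enabled)}): ", " ".join(enabled))
    print(f"disabled ({len(disabled)}):", " ".join(disabled))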
Nov 8 00:26:26.281626 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:26:26.281647 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:26:26.281668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:26:26.290432 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:26:26.290484 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:26.290505 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:26:26.290525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:26:26.290552 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:26:26.290572 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:26:26.290592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:26.290611 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:26:26.290630 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:26:26.290648 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:26:26.290667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:26:26.290703 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:26:26.290726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:26.290745 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:26.290763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:26.290782 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:26:26.290805 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:26:26.290869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:26:26.290891 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:26:26.290912 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:26.290934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:26:26.290960 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:26:26.290982 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:26:26.291007 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:26:26.291029 systemd[1]: Reached target machines.target - Containers. Nov 8 00:26:26.291051 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:26:26.291073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:26:26.291095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:26:26.291117 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:26:26.291142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:26:26.291164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
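Several units above are skipped with "unmet condition check" messages such as ConditionPathExists=/var/lib/machines.raw or ConditionVirtualization=xen. A heavily simplified sketch of that kind of check, assuming a detected virtualization of "kvm" as reported earlier in this boot; systemd's real condition logic covers far more cases:

    # Simplified stand-ins for two of the condition types seen in the log.
    import os

    def condition_path_exists(arg: str) -> bool:
        # "ConditionPathExists=!/path" means the path must NOT exist.
        negate = arg.startswith("!")
        path = arg[1:] if negate else arg
        exists = os.path.exists(path)
        return (not exists) if negate else exists

    def condition_virtualization(arg: str, detected: str) -> bool:
        # systemd-detect-virt would supply `detected` ("kvm" on this host).
        return arg == detected

    print(condition_path_exists("/var/lib/machines.raw"))    # False -> unit skipped
    print(condition_virtualization("xen", detected="kvm"))   # False -> unit skipped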
Nov 8 00:26:26.291185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:26:26.291206 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:26:26.291228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:26:26.291250 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:26:26.291272 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:26:26.291293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:26:26.291315 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:26:26.291341 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:26:26.291361 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:26:26.291379 kernel: loop: module loaded Nov 8 00:26:26.291398 kernel: fuse: init (API version 7.39) Nov 8 00:26:26.291417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:26:26.291434 kernel: ACPI: bus type drm_connector registered Nov 8 00:26:26.291452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:26:26.291471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:26:26.291535 systemd-journald[1452]: Collecting audit messages is disabled. Nov 8 00:26:26.291579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:26:26.291600 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:26:26.291622 systemd[1]: Stopped verity-setup.service. Nov 8 00:26:26.291646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:26.291667 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:26:26.293843 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:26:26.293883 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:26:26.293910 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:26:26.293931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:26:26.293950 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:26:26.293972 systemd-journald[1452]: Journal started Nov 8 00:26:26.294020 systemd-journald[1452]: Runtime Journal (/run/log/journal/ec227bc69c55ae191f2a8f24c5fada91) is 4.7M, max 38.2M, 33.4M free. Nov 8 00:26:25.948191 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:26:25.966550 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 8 00:26:26.298068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:26.298137 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:26:25.967065 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:26:26.301650 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:26:26.302033 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:26:26.303349 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:26:26.304117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 8 00:26:26.305246 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:26:26.305459 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:26:26.308207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:26:26.308449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:26:26.309867 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:26:26.310090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:26:26.311276 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:26:26.311483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:26:26.313903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:26.315324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:26:26.316349 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:26:26.323177 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:26:26.339496 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:26:26.351285 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:26:26.360968 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:26:26.362554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:26:26.362734 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:26:26.364984 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:26:26.374215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:26:26.384240 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:26:26.386028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:26.390761 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:26:26.394917 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:26:26.396412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:26:26.404918 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:26:26.405635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:26:26.411924 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:26:26.417176 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:26:26.438818 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:26:26.448014 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:26.450470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:26:26.451291 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:26:26.455114 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Nov 8 00:26:26.466049 systemd-journald[1452]: Time spent on flushing to /var/log/journal/ec227bc69c55ae191f2a8f24c5fada91 is 177.714ms for 964 entries. Nov 8 00:26:26.466049 systemd-journald[1452]: System Journal (/var/log/journal/ec227bc69c55ae191f2a8f24c5fada91) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:26:26.680235 systemd-journald[1452]: Received client request to flush runtime journal. Nov 8 00:26:26.680322 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:26:26.680358 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:26:26.680387 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:26:26.471150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:26.476548 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:26:26.486814 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:26:26.496973 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:26:26.505914 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:26:26.572705 udevadm[1508]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:26:26.610498 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:26:26.623122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:26:26.624881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:26:26.626421 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:26:26.684981 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:26:26.694866 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Nov 8 00:26:26.694894 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Nov 8 00:26:26.709388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:26.769749 kernel: loop2: detected capacity change from 0 to 61336 Nov 8 00:26:26.902720 kernel: loop3: detected capacity change from 0 to 219144 Nov 8 00:26:27.035741 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:26:27.085847 kernel: loop5: detected capacity change from 0 to 142488 Nov 8 00:26:27.129149 kernel: loop6: detected capacity change from 0 to 61336 Nov 8 00:26:27.160719 kernel: loop7: detected capacity change from 0 to 219144 Nov 8 00:26:27.189946 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 8 00:26:27.190713 (sd-merge)[1523]: Merged extensions into '/usr'. Nov 8 00:26:27.200990 systemd[1]: Reloading requested from client PID 1496 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:26:27.201009 systemd[1]: Reloading... Nov 8 00:26:27.344714 ldconfig[1491]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:26:27.386790 zram_generator::config[1550]: No configuration found. Nov 8 00:26:27.524778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:27.578777 systemd[1]: Reloading finished in 376 ms. 
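The journald flush statistics above (177.714 ms for 964 entries) translate into a per-entry cost; quick arithmetic:

    # Back-of-the-envelope numbers for the journald flush reported in the log.
    flush_ms = 177.714
    entries = 964

    print(f"average flush cost: {flush_ms / entries:.3f} ms per entry")
    print(f"throughput: {entries / (flush_ms / 1000):.0f} entries/s")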
Nov 8 00:26:27.610270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:26:27.611040 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:26:27.611705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:26:27.625942 systemd[1]: Starting ensure-sysext.service... Nov 8 00:26:27.627578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:26:27.630013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:27.640955 systemd[1]: Reloading requested from client PID 1602 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:26:27.640982 systemd[1]: Reloading... Nov 8 00:26:27.672221 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:26:27.672564 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:26:27.673471 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:26:27.675808 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Nov 8 00:26:27.675881 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Nov 8 00:26:27.679213 systemd-udevd[1604]: Using default interface naming scheme 'v255'. Nov 8 00:26:27.680073 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:26:27.680159 systemd-tmpfiles[1603]: Skipping /boot Nov 8 00:26:27.693219 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:26:27.693540 systemd-tmpfiles[1603]: Skipping /boot Nov 8 00:26:27.727408 zram_generator::config[1627]: No configuration found. Nov 8 00:26:27.781674 (udev-worker)[1642]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:26:27.871249 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:26:27.871313 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 8 00:26:27.895740 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:26:27.902922 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:26:27.902993 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 8 00:26:27.905714 kernel: ACPI: button: Sleep Button [SLPF] Nov 8 00:26:27.958298 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:26:27.957240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:27.972715 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1650) Nov 8 00:26:28.053172 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:26:28.053518 systemd[1]: Reloading finished in 411 ms. Nov 8 00:26:28.073492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:28.076291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:28.162627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
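Every entry in this transcript follows the same console format: timestamp, identifier with PID, then the message. A small regex parser for that shape, shown against one of the tmpfiles warnings above (kernel lines without a [pid] are not covered by this pattern); journalctl exposes these fields natively, so this is purely illustrative:

    # Parse "<month> <day> <HH:MM:SS.micros> <ident>[<pid>]: <message>" lines.
    import re

    LOG_RE = re.compile(
        r"^(?P<month>\w{3}) +(?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<ident>[^\[ ]+)\[(?P<pid>\d+)\]: (?P<message>.*)$"
    )

    sample = ('Nov 8 00:26:27.672221 systemd-tmpfiles[1603]: '
              '/usr/lib/tmpfiles.d/provision.conf:20: '
              'Duplicate line for path "/root", ignoring.')

    m = LOG_RE.match(sample)
    if m:
        print(m.group("ident"), m.group("pid"), "->", m.group("message"))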
Nov 8 00:26:28.166099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:26:28.171836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:28.177029 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:26:28.183080 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:26:28.184041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:26:28.186823 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:26:28.191740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:26:28.196167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:26:28.200022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:26:28.203810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:26:28.207569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:28.215070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:26:28.237042 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:26:28.241922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:26:28.244829 lvm[1795]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:26:28.253136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:26:28.262645 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:26:28.276079 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:26:28.285492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:28.287171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:28.293217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:26:28.295828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:26:28.301816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:26:28.302028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:26:28.311105 systemd[1]: Finished ensure-sysext.service. Nov 8 00:26:28.313443 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:26:28.318087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:26:28.335392 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:26:28.336048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:26:28.339510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:28.351115 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:26:28.351790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 8 00:26:28.355760 augenrules[1829]: No rules Nov 8 00:26:28.359735 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:26:28.363342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:28.365404 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:26:28.365878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:26:28.369821 lvm[1827]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:26:28.375521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:26:28.383108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:26:28.397261 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:26:28.408981 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:26:28.410018 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:26:28.427517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:26:28.430224 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:26:28.432530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:26:28.451809 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:26:28.470818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:28.536125 systemd-networkd[1807]: lo: Link UP Nov 8 00:26:28.536137 systemd-networkd[1807]: lo: Gained carrier Nov 8 00:26:28.539301 systemd-networkd[1807]: Enumeration completed Nov 8 00:26:28.539466 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:26:28.542337 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:26:28.542352 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:26:28.548998 systemd-networkd[1807]: eth0: Link UP Nov 8 00:26:28.549282 systemd-networkd[1807]: eth0: Gained carrier Nov 8 00:26:28.549326 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:26:28.549762 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:26:28.550478 systemd-resolved[1808]: Positive Trust Anchors: Nov 8 00:26:28.550834 systemd-resolved[1808]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:26:28.550894 systemd-resolved[1808]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:26:28.557647 systemd-resolved[1808]: Defaulting to hostname 'linux'. Nov 8 00:26:28.561406 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:26:28.562180 systemd[1]: Reached target network.target - Network. Nov 8 00:26:28.562681 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:28.562872 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.23.96/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:26:28.563212 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:26:28.563888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:26:28.564470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:26:28.565189 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:26:28.565881 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:26:28.566413 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:26:28.566935 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:26:28.566967 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:26:28.567394 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:26:28.569095 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:26:28.570958 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:26:28.578953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:26:28.580133 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:26:28.580818 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:26:28.581528 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:26:28.583024 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:26:28.583067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:26:28.596162 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:26:28.601937 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:26:28.605917 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:26:28.608878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:26:28.613537 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
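The DHCPv4 lease reported above is 172.31.23.96/20 with gateway 172.31.16.1. The standard ipaddress module confirms the subnet boundaries and that the gateway sits inside the /20:

    # Check the DHCPv4 lease parameters from the systemd-networkd log entry.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.96/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print("network:  ", iface.network)                       # 172.31.16.0/20
    print("broadcast:", iface.network.broadcast_address)     # 172.31.31.255
    print("usable hosts:", iface.network.num_addresses - 2)  # 4094
    print("gateway in subnet:", gateway in iface.network)    # True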
Nov 8 00:26:28.614190 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:26:28.618501 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:26:28.624858 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:26:28.639861 jq[1859]: false Nov 8 00:26:28.636849 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:26:28.646994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:26:28.650919 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:26:28.667486 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:26:28.677014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:26:28.677707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:26:28.683252 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:26:28.689813 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:26:28.698195 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:26:28.698826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:26:28.729044 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:26:28.729922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:26:28.751514 (ntainerd)[1877]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:26:28.761344 jq[1868]: true Nov 8 00:26:28.765277 update_engine[1867]: I20251108 00:26:28.758646 1867 main.cc:92] Flatcar Update Engine starting Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: ---------------------------------------------------- Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: corporation. Support and training for ntp-4 are Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: available at https://www.nwtime.org/support Nov 8 00:26:28.804787 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: ---------------------------------------------------- Nov 8 00:26:28.801902 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:26:28.801932 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:26:28.801944 ntpd[1862]: ---------------------------------------------------- Nov 8 00:26:28.801954 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:26:28.805676 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:26:28.801964 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:26:28.801974 ntpd[1862]: corporation. 
Support and training for ntp-4 are Nov 8 00:26:28.801983 ntpd[1862]: available at https://www.nwtime.org/support Nov 8 00:26:28.801993 ntpd[1862]: ---------------------------------------------------- Nov 8 00:26:28.807407 ntpd[1862]: proto: precision = 0.094 usec (-23) Nov 8 00:26:28.812937 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: proto: precision = 0.094 usec (-23) Nov 8 00:26:28.812937 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: basedate set to 2025-10-26 Nov 8 00:26:28.812937 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: gps base set to 2025-10-26 (week 2390) Nov 8 00:26:28.808914 ntpd[1862]: basedate set to 2025-10-26 Nov 8 00:26:28.808935 ntpd[1862]: gps base set to 2025-10-26 (week 2390) Nov 8 00:26:28.814213 dbus-daemon[1858]: [system] SELinux support is enabled Nov 8 00:26:28.819289 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:26:28.826719 extend-filesystems[1860]: Found loop4 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found loop5 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found loop6 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found loop7 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p1 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p2 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p3 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found usr Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p4 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p6 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p7 Nov 8 00:26:28.826719 extend-filesystems[1860]: Found nvme0n1p9 Nov 8 00:26:28.826719 extend-filesystems[1860]: Checking size of /dev/nvme0n1p9 Nov 8 00:26:28.892756 extend-filesystems[1860]: Resized partition /dev/nvme0n1p9 Nov 8 00:26:28.896873 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listen normally on 3 eth0 172.31.23.96:123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listen normally on 4 lo [::1]:123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: bind(21) AF_INET6 fe80::4f9:e5ff:fe36:de01%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: unable to create socket on eth0 (5) for fe80::4f9:e5ff:fe36:de01%2#123 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: failed to init interface for address fe80::4f9:e5ff:fe36:de01%2 Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:26:28.897841 ntpd[1862]: 8 Nov 00:26:28 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:26:28.898273 update_engine[1867]: I20251108 00:26:28.865944 1867 update_check_scheduler.cc:74] Next update check in 6m49s Nov 8 00:26:28.898317 jq[1887]: true Nov 8 00:26:28.830633 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition 
check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:26:28.833400 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:26:28.898628 extend-filesystems[1906]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:26:28.830669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:26:28.833464 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:26:28.831313 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:26:28.836755 dbus-daemon[1858]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:26:28.831338 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:26:28.843593 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:26:28.843090 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:26:28.843649 ntpd[1862]: Listen normally on 3 eth0 172.31.23.96:123 Nov 8 00:26:28.843348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:26:28.843716 ntpd[1862]: Listen normally on 4 lo [::1]:123 Nov 8 00:26:28.869667 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:26:28.843771 ntpd[1862]: bind(21) AF_INET6 fe80::4f9:e5ff:fe36:de01%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:26:28.874065 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:26:28.843795 ntpd[1862]: unable to create socket on eth0 (5) for fe80::4f9:e5ff:fe36:de01%2#123 Nov 8 00:26:28.889290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:26:28.843812 ntpd[1862]: failed to init interface for address fe80::4f9:e5ff:fe36:de01%2 Nov 8 00:26:28.843850 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Nov 8 00:26:28.845372 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:26:28.871866 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:26:28.871901 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:26:28.994813 systemd-logind[1866]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:26:28.994843 systemd-logind[1866]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 8 00:26:28.994867 systemd-logind[1866]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:26:29.000939 systemd-logind[1866]: New seat seat0. Nov 8 00:26:29.005389 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 8 00:26:29.015186 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 8 00:26:29.020035 coreos-metadata[1857]: Nov 08 00:26:29.018 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.024 INFO Fetch successful Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.028 INFO Fetch successful Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.029 INFO Fetch successful Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.030 INFO Fetch successful Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 8 00:26:29.031552 coreos-metadata[1857]: Nov 08 00:26:29.031 INFO Fetch failed with 404: resource not found Nov 8 00:26:29.031975 coreos-metadata[1857]: Nov 08 00:26:29.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.032 INFO Fetch successful Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.032 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.034 INFO Fetch successful Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.034 INFO Fetch successful Nov 8 00:26:29.035897 coreos-metadata[1857]: Nov 08 00:26:29.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 8 00:26:29.036214 coreos-metadata[1857]: Nov 08 00:26:29.036 INFO Fetch successful Nov 8 00:26:29.036214 coreos-metadata[1857]: Nov 08 00:26:29.036 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 8 00:26:29.036644 extend-filesystems[1906]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 8 00:26:29.036644 extend-filesystems[1906]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:26:29.036644 extend-filesystems[1906]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 8 00:26:29.047277 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1649) Nov 8 00:26:29.047316 coreos-metadata[1857]: Nov 08 00:26:29.038 INFO Fetch successful Nov 8 00:26:29.040133 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:26:29.047458 extend-filesystems[1860]: Resized filesystem in /dev/nvme0n1p9 Nov 8 00:26:29.040390 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:26:29.084330 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:26:29.084520 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
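Here extend-filesystems grows the root ext4 filesystem online: resize2fs 1.47.1 notes the descriptor-block change and the kernel confirms the resize from 553472 to 3587067 4k blocks. A rough manual equivalent, assuming the same /dev/nvme0n1p9 device and a partition that has already been enlarged (these commands are illustrative, not taken from the log):

  df -h /                    # size before
  lsblk /dev/nvme0n1         # partition layout
  resize2fs /dev/nvme0n1p9   # online resize; supported on a mounted ext4 filesystem
  df -h /                    # size after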
Nov 8 00:26:29.091278 dbus-daemon[1858]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1904 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:26:29.108961 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:26:29.145899 bash[1941]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:26:29.156164 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:26:29.160038 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:26:29.164341 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:26:29.176114 systemd[1]: Starting sshkeys.service... Nov 8 00:26:29.223218 polkitd[1949]: Started polkitd version 121 Nov 8 00:26:29.244187 polkitd[1949]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:26:29.244271 polkitd[1949]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:26:29.244817 polkitd[1949]: Finished loading, compiling and executing 2 rules Nov 8 00:26:29.261007 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:26:29.261808 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:26:29.263980 polkitd[1949]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:26:29.292641 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:26:29.318200 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:26:29.394014 systemd-hostnamed[1904]: Hostname set to (transient) Nov 8 00:26:29.394137 systemd-resolved[1808]: System hostname changed to 'ip-172-31-23-96'. Nov 8 00:26:29.437595 sshd_keygen[1898]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:26:29.442263 locksmithd[1908]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:26:29.486193 coreos-metadata[2024]: Nov 08 00:26:29.486 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:26:29.490574 coreos-metadata[2024]: Nov 08 00:26:29.487 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:26:29.490574 coreos-metadata[2024]: Nov 08 00:26:29.488 INFO Fetch successful Nov 8 00:26:29.490574 coreos-metadata[2024]: Nov 08 00:26:29.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:26:29.490574 coreos-metadata[2024]: Nov 08 00:26:29.489 INFO Fetch successful Nov 8 00:26:29.491412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:26:29.496051 unknown[2024]: wrote ssh authorized keys file for user: core Nov 8 00:26:29.501949 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:26:29.523495 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:26:29.524594 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:26:29.535633 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:26:29.543127 update-ssh-keys[2059]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:26:29.545145 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:26:29.552587 systemd[1]: Finished sshkeys.service. 
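coreos-metadata follows the IMDSv2 flow: it PUTs to the token endpoint, then fetches instance-id, instance-type, addresses, placement and hostnames, and finally the public SSH key that update-ssh-keys writes to /home/core/.ssh/authorized_keys; the 404 on the ipv6 path is expected for an instance with no IPv6 address. A hand-rolled sketch of the same flow with curl, using the endpoints shown in the log (the token TTL is an arbitrary value, not something the agent logs):

  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/2021-01-03/meta-data/instance-id
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key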
Nov 8 00:26:29.554682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:26:29.566286 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:26:29.567632 containerd[1877]: time="2025-11-08T00:26:29.567260616Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:26:29.574218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:26:29.577057 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:26:29.597864 containerd[1877]: time="2025-11-08T00:26:29.597782980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.599497 containerd[1877]: time="2025-11-08T00:26:29.599449445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:29.599497 containerd[1877]: time="2025-11-08T00:26:29.599489252Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:26:29.599662 containerd[1877]: time="2025-11-08T00:26:29.599511440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:26:29.599777 containerd[1877]: time="2025-11-08T00:26:29.599746716Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:26:29.599823 containerd[1877]: time="2025-11-08T00:26:29.599778355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.599880 containerd[1877]: time="2025-11-08T00:26:29.599857008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:29.599920 containerd[1877]: time="2025-11-08T00:26:29.599879543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600228 containerd[1877]: time="2025-11-08T00:26:29.600190247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600228 containerd[1877]: time="2025-11-08T00:26:29.600217926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600332 containerd[1877]: time="2025-11-08T00:26:29.600240210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600332 containerd[1877]: time="2025-11-08T00:26:29.600254793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600459 containerd[1877]: time="2025-11-08T00:26:29.600364933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600645 containerd[1877]: time="2025-11-08T00:26:29.600616898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600806 containerd[1877]: time="2025-11-08T00:26:29.600779718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:29.600806 containerd[1877]: time="2025-11-08T00:26:29.600801759Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:26:29.600928 containerd[1877]: time="2025-11-08T00:26:29.600904704Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:26:29.600997 containerd[1877]: time="2025-11-08T00:26:29.600974903Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:26:29.613614 containerd[1877]: time="2025-11-08T00:26:29.613548644Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:26:29.613614 containerd[1877]: time="2025-11-08T00:26:29.613617407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:26:29.613790 containerd[1877]: time="2025-11-08T00:26:29.613637898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:26:29.613790 containerd[1877]: time="2025-11-08T00:26:29.613666911Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:26:29.613790 containerd[1877]: time="2025-11-08T00:26:29.613696817Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:26:29.613885 containerd[1877]: time="2025-11-08T00:26:29.613860782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:26:29.614129 containerd[1877]: time="2025-11-08T00:26:29.614102062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:26:29.614236 containerd[1877]: time="2025-11-08T00:26:29.614216177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:26:29.614268 containerd[1877]: time="2025-11-08T00:26:29.614238427Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:26:29.614268 containerd[1877]: time="2025-11-08T00:26:29.614254369Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:26:29.614320 containerd[1877]: time="2025-11-08T00:26:29.614269053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614320 containerd[1877]: time="2025-11-08T00:26:29.614282300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614320 containerd[1877]: time="2025-11-08T00:26:29.614294394Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614320 containerd[1877]: time="2025-11-08T00:26:29.614307632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614321515Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614335622Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614347189Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614360289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614378304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614392055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614405 containerd[1877]: time="2025-11-08T00:26:29.614403596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614417044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614429071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614441784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614453022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614471003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614483265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614502892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614514105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614526366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614539280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614562 containerd[1877]: time="2025-11-08T00:26:29.614553089Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614572787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614583681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614593809Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614633769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614653492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614664525Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614676499Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614685685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614718730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614728772Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:26:29.614815 containerd[1877]: time="2025-11-08T00:26:29.614739123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:26:29.615149 containerd[1877]: time="2025-11-08T00:26:29.615049307Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:26:29.615149 containerd[1877]: time="2025-11-08T00:26:29.615119395Z" level=info msg="Connect containerd service" Nov 8 00:26:29.615392 containerd[1877]: time="2025-11-08T00:26:29.615172431Z" level=info msg="using legacy CRI server" Nov 8 00:26:29.615392 containerd[1877]: time="2025-11-08T00:26:29.615179923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:26:29.617069 containerd[1877]: time="2025-11-08T00:26:29.616099872Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:26:29.617709 containerd[1877]: time="2025-11-08T00:26:29.617631496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:26:29.617955 
containerd[1877]: time="2025-11-08T00:26:29.617884926Z" level=info msg="Start subscribing containerd event" Nov 8 00:26:29.618467 containerd[1877]: time="2025-11-08T00:26:29.618431689Z" level=info msg="Start recovering state" Nov 8 00:26:29.618536 containerd[1877]: time="2025-11-08T00:26:29.618522633Z" level=info msg="Start event monitor" Nov 8 00:26:29.618580 containerd[1877]: time="2025-11-08T00:26:29.618548049Z" level=info msg="Start snapshots syncer" Nov 8 00:26:29.618580 containerd[1877]: time="2025-11-08T00:26:29.618562112Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:26:29.618580 containerd[1877]: time="2025-11-08T00:26:29.618573148Z" level=info msg="Start streaming server" Nov 8 00:26:29.618725 containerd[1877]: time="2025-11-08T00:26:29.618400071Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:26:29.618806 containerd[1877]: time="2025-11-08T00:26:29.618785091Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:26:29.618942 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:26:29.620733 containerd[1877]: time="2025-11-08T00:26:29.620409323Z" level=info msg="containerd successfully booted in 0.054801s" Nov 8 00:26:29.656904 systemd-networkd[1807]: eth0: Gained IPv6LL Nov 8 00:26:29.660210 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:26:29.661523 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:26:29.668112 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 8 00:26:29.674587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:29.679809 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:26:29.721123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:26:29.731987 amazon-ssm-agent[2075]: Initializing new seelog logger Nov 8 00:26:29.732343 amazon-ssm-agent[2075]: New Seelog Logger Creation Complete Nov 8 00:26:29.732343 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.732343 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.732746 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 processing appconfig overrides Nov 8 00:26:29.733410 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.733488 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO Proxy environment variables: Nov 8 00:26:29.734034 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.734167 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 processing appconfig overrides Nov 8 00:26:29.734551 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.734551 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.734641 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 processing appconfig overrides Nov 8 00:26:29.737226 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:26:29.737552 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
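The CRI configuration dump above shows containerd 1.7.21 using the overlayfs snapshotter and the runc runtime with SystemdCgroup:true, and the "failed to load cni during init" error is expected because /etc/cni/net.d is still empty at this point (the Calico pods started later are what will drop a CNI config there). One way to confirm the effective runtime settings on the running node, assuming crictl is installed and pointed at the socket from the log (crictl itself is not mentioned in this log):

  export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
  crictl info | grep -iE 'systemdcgroup|confdir'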
Nov 8 00:26:29.737754 amazon-ssm-agent[2075]: 2025/11/08 00:26:29 processing appconfig overrides Nov 8 00:26:29.834767 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO http_proxy: Nov 8 00:26:29.932462 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO no_proxy: Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO https_proxy: Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO Agent will take identity from EC2 Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:26:29.963005 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [Registrar] Starting registrar module Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [EC2Identity] EC2 registration was successful. Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:26:29.963315 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:26:30.031075 amazon-ssm-agent[2075]: 2025-11-08 00:26:29 INFO [CredentialRefresher] Next credential rotation will be in 31.7916604329 minutes Nov 8 00:26:30.980762 amazon-ssm-agent[2075]: 2025-11-08 00:26:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:26:31.082815 amazon-ssm-agent[2075]: 2025-11-08 00:26:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2094) started Nov 8 00:26:31.183757 amazon-ssm-agent[2075]: 2025-11-08 00:26:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:26:31.802379 ntpd[1862]: Listen normally on 6 eth0 [fe80::4f9:e5ff:fe36:de01%2]:123 Nov 8 00:26:31.802801 ntpd[1862]: 8 Nov 00:26:31 ntpd[1862]: Listen normally on 6 eth0 [fe80::4f9:e5ff:fe36:de01%2]:123 Nov 8 00:26:31.951044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:31.952681 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:26:31.954545 systemd[1]: Startup finished in 619ms (kernel) + 5.616s (initrd) + 6.680s (userspace) = 12.915s. 
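With eth0 now holding its IPv6 link-local address, ntpd opens the socket that failed earlier, kubelet is launched, and systemd reports the boot as 619ms (kernel) + 5.616s (initrd) + 6.680s (userspace) = 12.915s. If that userspace figure needed a breakdown, systemd's own tooling would give it (a generic sketch; none of these commands were run in this log):

  systemd-analyze                                   # same totals as the log line
  systemd-analyze blame                             # per-unit startup cost
  systemd-analyze critical-chain multi-user.target  # longest dependency path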
Nov 8 00:26:31.959602 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:26:32.910916 kubelet[2109]: E1108 00:26:32.910860 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:26:32.913749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:26:32.913905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:26:32.914226 systemd[1]: kubelet.service: Consumed 1.019s CPU time. Nov 8 00:26:33.562766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:26:33.575211 systemd[1]: Started sshd@0-172.31.23.96:22-139.178.89.65:56936.service - OpenSSH per-connection server daemon (139.178.89.65:56936). Nov 8 00:26:33.741812 sshd[2121]: Accepted publickey for core from 139.178.89.65 port 56936 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:33.745043 sshd[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:33.753626 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:26:33.758007 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:26:33.760148 systemd-logind[1866]: New session 1 of user core. Nov 8 00:26:33.772971 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:26:33.779035 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:26:33.784088 (systemd)[2125]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:26:33.900821 systemd[2125]: Queued start job for default target default.target. Nov 8 00:26:33.920030 systemd[2125]: Created slice app.slice - User Application Slice. Nov 8 00:26:33.920084 systemd[2125]: Reached target paths.target - Paths. Nov 8 00:26:33.920106 systemd[2125]: Reached target timers.target - Timers. Nov 8 00:26:33.929815 systemd[2125]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:26:33.978054 systemd[2125]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:26:33.978179 systemd[2125]: Reached target sockets.target - Sockets. Nov 8 00:26:33.978194 systemd[2125]: Reached target basic.target - Basic System. Nov 8 00:26:33.978237 systemd[2125]: Reached target default.target - Main User Target. Nov 8 00:26:33.978267 systemd[2125]: Startup finished in 187ms. Nov 8 00:26:33.978519 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:26:33.986944 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:26:34.139073 systemd[1]: Started sshd@1-172.31.23.96:22-139.178.89.65:56944.service - OpenSSH per-connection server daemon (139.178.89.65:56944). Nov 8 00:26:34.300121 sshd[2136]: Accepted publickey for core from 139.178.89.65 port 56944 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:34.301588 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:34.306833 systemd-logind[1866]: New session 2 of user core. Nov 8 00:26:34.315979 systemd[1]: Started session-2.scope - Session 2 of User core. 
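This first kubelet start fails with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init or join, so the failure is the expected state of a node that has not been joined to a cluster. Illustrative commands to reproduce the diagnosis (not part of the log):

  systemctl status kubelet --no-pager
  journalctl -u kubelet -n 20 --no-pager
  ls -l /var/lib/kubelet/config.yaml   # absent until kubeadm init/join creates it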
Nov 8 00:26:34.438054 sshd[2136]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:34.441832 systemd[1]: sshd@1-172.31.23.96:22-139.178.89.65:56944.service: Deactivated successfully. Nov 8 00:26:34.443452 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:26:34.444077 systemd-logind[1866]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:26:34.444968 systemd-logind[1866]: Removed session 2. Nov 8 00:26:34.476114 systemd[1]: Started sshd@2-172.31.23.96:22-139.178.89.65:56950.service - OpenSSH per-connection server daemon (139.178.89.65:56950). Nov 8 00:26:34.635095 sshd[2143]: Accepted publickey for core from 139.178.89.65 port 56950 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:34.636549 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:34.641411 systemd-logind[1866]: New session 3 of user core. Nov 8 00:26:34.650961 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:26:34.767233 sshd[2143]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:34.770178 systemd[1]: sshd@2-172.31.23.96:22-139.178.89.65:56950.service: Deactivated successfully. Nov 8 00:26:34.771749 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:26:34.772963 systemd-logind[1866]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:26:34.774243 systemd-logind[1866]: Removed session 3. Nov 8 00:26:34.801639 systemd[1]: Started sshd@3-172.31.23.96:22-139.178.89.65:56960.service - OpenSSH per-connection server daemon (139.178.89.65:56960). Nov 8 00:26:34.960809 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 56960 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:34.962285 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:34.968169 systemd-logind[1866]: New session 4 of user core. Nov 8 00:26:34.973911 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:26:35.095643 sshd[2150]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:35.099014 systemd[1]: sshd@3-172.31.23.96:22-139.178.89.65:56960.service: Deactivated successfully. Nov 8 00:26:35.100885 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:26:35.101946 systemd-logind[1866]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:26:35.102841 systemd-logind[1866]: Removed session 4. Nov 8 00:26:35.127855 systemd[1]: Started sshd@4-172.31.23.96:22-139.178.89.65:56962.service - OpenSSH per-connection server daemon (139.178.89.65:56962). Nov 8 00:26:35.290317 sshd[2157]: Accepted publickey for core from 139.178.89.65 port 56962 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:35.292008 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:35.297458 systemd-logind[1866]: New session 5 of user core. Nov 8 00:26:35.302956 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:26:35.416671 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:26:35.416999 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:35.431335 sudo[2160]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:35.454904 sshd[2157]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:35.458743 systemd[1]: sshd@4-172.31.23.96:22-139.178.89.65:56962.service: Deactivated successfully. 
Nov 8 00:26:35.461008 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:26:35.462520 systemd-logind[1866]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:26:35.463901 systemd-logind[1866]: Removed session 5. Nov 8 00:26:35.492087 systemd[1]: Started sshd@5-172.31.23.96:22-139.178.89.65:46488.service - OpenSSH per-connection server daemon (139.178.89.65:46488). Nov 8 00:26:35.648877 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 46488 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:35.650506 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:35.655790 systemd-logind[1866]: New session 6 of user core. Nov 8 00:26:35.660959 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:26:35.761887 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:26:35.762198 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:35.766186 sudo[2169]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:35.771844 sudo[2168]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:26:35.772137 sudo[2168]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:35.792028 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:26:35.793953 auditctl[2172]: No rules Nov 8 00:26:35.794327 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:26:35.794513 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:35.801152 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:26:36.353444 systemd-resolved[1808]: Clock change detected. Flushing caches. Nov 8 00:26:36.378963 augenrules[2190]: No rules Nov 8 00:26:36.380795 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:36.382037 sudo[2168]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:36.405294 sshd[2165]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:36.409818 systemd[1]: sshd@5-172.31.23.96:22-139.178.89.65:46488.service: Deactivated successfully. Nov 8 00:26:36.411597 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:26:36.412411 systemd-logind[1866]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:26:36.413563 systemd-logind[1866]: Removed session 6. Nov 8 00:26:36.440682 systemd[1]: Started sshd@6-172.31.23.96:22-139.178.89.65:46496.service - OpenSSH per-connection server daemon (139.178.89.65:46496). Nov 8 00:26:36.605934 sshd[2198]: Accepted publickey for core from 139.178.89.65 port 46496 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:36.607880 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:36.612703 systemd-logind[1866]: New session 7 of user core. Nov 8 00:26:36.618862 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:26:36.720984 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:26:36.721519 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:37.777100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:37.777864 systemd[1]: kubelet.service: Consumed 1.019s CPU time. 
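The core user's script removes the default audit rule files and restarts audit-rules; auditctl and augenrules both then report "No rules", i.e. an empty kernel audit ruleset. (The "Clock change detected. Flushing caches." message from systemd-resolved is most likely ntpd stepping the clock into sync, which would also explain the small jump in the log timestamps.) To inspect the loaded ruleset directly, assuming the same audit userspace tools seen in the log:

  auditctl -l          # prints "No rules" when the kernel ruleset is empty
  augenrules --check   # reports whether /etc/audit/rules.d needs regenerating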
Nov 8 00:26:37.787133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:37.825767 systemd[1]: Reloading requested from client PID 2235 ('systemctl') (unit session-7.scope)... Nov 8 00:26:37.825786 systemd[1]: Reloading... Nov 8 00:26:37.968654 zram_generator::config[2275]: No configuration found. Nov 8 00:26:38.112494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:38.199365 systemd[1]: Reloading finished in 372 ms. Nov 8 00:26:38.244561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:26:38.244668 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:26:38.244877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:38.247984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:38.439137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:38.449053 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:26:38.492660 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:26:38.492660 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:26:38.492660 kubelet[2339]: I1108 00:26:38.492052 2339 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:26:39.150137 kubelet[2339]: I1108 00:26:39.150099 2339 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:26:39.150286 kubelet[2339]: I1108 00:26:39.150258 2339 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:26:39.151140 kubelet[2339]: I1108 00:26:39.151117 2339 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:26:39.151140 kubelet[2339]: I1108 00:26:39.151141 2339 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:26:39.151466 kubelet[2339]: I1108 00:26:39.151445 2339 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:26:39.159655 kubelet[2339]: I1108 00:26:39.157879 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:26:39.164583 kubelet[2339]: E1108 00:26:39.164544 2339 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:26:39.164807 kubelet[2339]: I1108 00:26:39.164788 2339 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:26:39.167674 kubelet[2339]: I1108 00:26:39.167625 2339 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:26:39.167957 kubelet[2339]: I1108 00:26:39.167915 2339 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:26:39.168147 kubelet[2339]: I1108 00:26:39.167956 2339 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.23.96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:26:39.168306 kubelet[2339]: I1108 00:26:39.168154 2339 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:26:39.168306 kubelet[2339]: I1108 00:26:39.168170 2339 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:26:39.168306 kubelet[2339]: I1108 00:26:39.168302 2339 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:26:39.176599 kubelet[2339]: I1108 00:26:39.176554 2339 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:39.178524 kubelet[2339]: I1108 00:26:39.178483 2339 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:26:39.178524 kubelet[2339]: I1108 00:26:39.178509 2339 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:26:39.178524 kubelet[2339]: I1108 00:26:39.178532 2339 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:26:39.178784 kubelet[2339]: I1108 00:26:39.178547 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:26:39.179076 kubelet[2339]: E1108 00:26:39.178998 2339 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:39.179076 kubelet[2339]: E1108 00:26:39.179043 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:39.181304 kubelet[2339]: I1108 00:26:39.181220 2339 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:26:39.181801 kubelet[2339]: I1108 00:26:39.181782 2339 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:26:39.181866 kubelet[2339]: I1108 00:26:39.181811 2339 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:26:39.181866 kubelet[2339]: W1108 00:26:39.181857 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:26:39.186577 kubelet[2339]: I1108 00:26:39.184654 2339 server.go:1262] "Started kubelet" Nov 8 00:26:39.186577 kubelet[2339]: I1108 00:26:39.185215 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:26:39.186577 kubelet[2339]: I1108 00:26:39.186077 2339 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:26:39.190887 kubelet[2339]: I1108 00:26:39.190829 2339 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:26:39.191129 kubelet[2339]: I1108 00:26:39.190971 2339 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:26:39.192659 kubelet[2339]: I1108 00:26:39.191503 2339 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:26:39.193476 kubelet[2339]: I1108 00:26:39.193455 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:26:39.194205 kubelet[2339]: E1108 00:26:39.194152 2339 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.23.96\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:26:39.194824 kubelet[2339]: I1108 00:26:39.194216 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:26:39.197255 kubelet[2339]: I1108 00:26:39.196871 2339 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:26:39.197255 kubelet[2339]: E1108 00:26:39.196985 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.197255 kubelet[2339]: I1108 00:26:39.197239 2339 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:26:39.197481 kubelet[2339]: I1108 00:26:39.197290 2339 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:26:39.197967 kubelet[2339]: E1108 00:26:39.194455 2339 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:26:39.198124 kubelet[2339]: I1108 00:26:39.198095 2339 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:26:39.198227 kubelet[2339]: I1108 00:26:39.198205 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:26:39.202976 kubelet[2339]: I1108 00:26:39.202021 2339 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:26:39.203267 kubelet[2339]: E1108 00:26:39.199442 2339 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.96.1875e06f9e24818e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.96,UID:172.31.23.96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.23.96,},FirstTimestamp:2025-11-08 00:26:39.184601486 +0000 UTC m=+0.731190776,LastTimestamp:2025-11-08 00:26:39.184601486 +0000 UTC m=+0.731190776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.96,}" Nov 8 00:26:39.219270 kubelet[2339]: E1108 00:26:39.219229 2339 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:26:39.219405 kubelet[2339]: E1108 00:26:39.219382 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.23.96\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Nov 8 00:26:39.231370 kubelet[2339]: I1108 00:26:39.231345 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:26:39.231668 kubelet[2339]: I1108 00:26:39.231533 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:26:39.231668 kubelet[2339]: I1108 00:26:39.231557 2339 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:39.239568 kubelet[2339]: I1108 00:26:39.239260 2339 policy_none.go:49] "None policy: Start" Nov 8 00:26:39.239568 kubelet[2339]: I1108 00:26:39.239290 2339 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:26:39.239568 kubelet[2339]: I1108 00:26:39.239305 2339 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:26:39.243590 kubelet[2339]: I1108 00:26:39.243564 2339 policy_none.go:47] "Start" Nov 8 00:26:39.251478 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
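The burst of "User \"system:anonymous\" ... is forbidden" errors appears while the kubelet is still performing its TLS bootstrap ("Client rotation is on, will bootstrap in background" above): until it obtains a client certificate it has no identity the API server will authorize, so node, service and CSIDriver watches, lease updates and event posts are all rejected. The later "Certificate rotation detected" message marks the point where real credentials take over. One way to check the bootstrap result on the node, assuming the default kubelet certificate paths:

  ls -l /var/lib/kubelet/pki/kubelet-client-current.pem
  openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem \
    -noout -subject -enddate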
Nov 8 00:26:39.267867 kubelet[2339]: E1108 00:26:39.267452 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.96.1875e06fa0ca349d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.96,UID:172.31.23.96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.23.96 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.23.96,},FirstTimestamp:2025-11-08 00:26:39.229015197 +0000 UTC m=+0.775604673,LastTimestamp:2025-11-08 00:26:39.229015197 +0000 UTC m=+0.775604673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.96,}" Nov 8 00:26:39.273565 kubelet[2339]: E1108 00:26:39.273417 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.96.1875e06fa0ca643a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.96,UID:172.31.23.96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.23.96 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.23.96,},FirstTimestamp:2025-11-08 00:26:39.229027386 +0000 UTC m=+0.775616660,LastTimestamp:2025-11-08 00:26:39.229027386 +0000 UTC m=+0.775616660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.96,}" Nov 8 00:26:39.279418 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:26:39.291284 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:26:39.295923 kubelet[2339]: E1108 00:26:39.295886 2339 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:26:39.296152 kubelet[2339]: I1108 00:26:39.296123 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:26:39.296222 kubelet[2339]: I1108 00:26:39.296140 2339 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:26:39.297088 kubelet[2339]: I1108 00:26:39.296966 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:26:39.297088 kubelet[2339]: E1108 00:26:39.297025 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.302315 kubelet[2339]: E1108 00:26:39.300845 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:26:39.302315 kubelet[2339]: E1108 00:26:39.300894 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.96\" not found" Nov 8 00:26:39.311012 kubelet[2339]: I1108 00:26:39.310967 2339 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 8 00:26:39.312680 kubelet[2339]: I1108 00:26:39.312287 2339 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 8 00:26:39.312680 kubelet[2339]: I1108 00:26:39.312308 2339 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:26:39.312680 kubelet[2339]: I1108 00:26:39.312331 2339 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:26:39.312680 kubelet[2339]: E1108 00:26:39.312375 2339 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 8 00:26:39.401850 kubelet[2339]: I1108 00:26:39.401737 2339 kubelet_node_status.go:75] "Attempting to register node" node="172.31.23.96" Nov 8 00:26:39.422679 kubelet[2339]: I1108 00:26:39.422625 2339 kubelet_node_status.go:78] "Successfully registered node" node="172.31.23.96" Nov 8 00:26:39.422679 kubelet[2339]: E1108 00:26:39.422675 2339 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172.31.23.96\": node \"172.31.23.96\" not found" Nov 8 00:26:39.449113 kubelet[2339]: E1108 00:26:39.449066 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.454263 sudo[2201]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:39.477943 sshd[2198]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:39.481102 systemd[1]: sshd@6-172.31.23.96:22-139.178.89.65:46496.service: Deactivated successfully. Nov 8 00:26:39.483001 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:26:39.484856 systemd-logind[1866]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:26:39.486484 systemd-logind[1866]: Removed session 7. Nov 8 00:26:39.549737 kubelet[2339]: E1108 00:26:39.549691 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.650408 kubelet[2339]: E1108 00:26:39.650347 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.750538 kubelet[2339]: E1108 00:26:39.750494 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.851386 kubelet[2339]: E1108 00:26:39.851342 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:39.952134 kubelet[2339]: E1108 00:26:39.952088 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.053079 kubelet[2339]: E1108 00:26:40.052949 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.153865 kubelet[2339]: E1108 00:26:40.153812 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.153865 kubelet[2339]: I1108 00:26:40.153849 2339 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 8 00:26:40.154061 kubelet[2339]: I1108 00:26:40.154031 2339 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Nov 8 00:26:40.180349 kubelet[2339]: E1108 
00:26:40.180214 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:40.254003 kubelet[2339]: E1108 00:26:40.253921 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.355168 kubelet[2339]: E1108 00:26:40.355033 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.455876 kubelet[2339]: E1108 00:26:40.455659 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.556646 kubelet[2339]: E1108 00:26:40.556596 2339 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.23.96\" not found" Nov 8 00:26:40.658478 kubelet[2339]: I1108 00:26:40.658293 2339 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Nov 8 00:26:40.658887 containerd[1877]: time="2025-11-08T00:26:40.658786622Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:26:40.659347 kubelet[2339]: I1108 00:26:40.658985 2339 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Nov 8 00:26:41.181163 kubelet[2339]: I1108 00:26:41.181090 2339 apiserver.go:52] "Watching apiserver" Nov 8 00:26:41.181163 kubelet[2339]: E1108 00:26:41.181117 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:41.211815 kubelet[2339]: E1108 00:26:41.211482 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:41.217943 systemd[1]: Created slice kubepods-besteffort-podb9aa2490_fb89_4921_873e_4d05909944e9.slice - libcontainer container kubepods-besteffort-podb9aa2490_fb89_4921_873e_4d05909944e9.slice. Nov 8 00:26:41.237511 systemd[1]: Created slice kubepods-besteffort-pod8d6cd128_4b37_49e8_a376_76238b190121.slice - libcontainer container kubepods-besteffort-pod8d6cd128_4b37_49e8_a376_76238b190121.slice. 
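Annotation: the recurring file_linux.go:61 warning above only means that the kubelet's static-pod manifest directory, /etc/kubernetes/manifests, does not exist on this node; the kubelet ignores it and keeps polling, so the message repeats for the rest of the log. A minimal sketch, assuming local root access on the node, that checks (and optionally creates) that directory — the path is taken from the log, everything else is illustrative:

# Illustrative only: check the kubelet staticPodPath reported in the log above.
# /etc/kubernetes/manifests comes from the log; creating it merely silences the
# "Unable to read config path" message and is not required for this boot.
from pathlib import Path

static_pod_path = Path("/etc/kubernetes/manifests")

if static_pod_path.is_dir():
    print(f"{static_pod_path} exists; kubelet will watch it for static pod manifests")
else:
    print(f"{static_pod_path} missing; kubelet logs 'Unable to read config path' and ignores it")
    # static_pod_path.mkdir(parents=True, exist_ok=True)  # uncomment to create (root required)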
Nov 8 00:26:41.298025 kubelet[2339]: I1108 00:26:41.297975 2339 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:26:41.312237 kubelet[2339]: I1108 00:26:41.312193 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e6b44f6-8f09-463d-b422-20e45aa79602-socket-dir\") pod \"csi-node-driver-sqf2j\" (UID: \"7e6b44f6-8f09-463d-b422-20e45aa79602\") " pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:41.312530 kubelet[2339]: I1108 00:26:41.312359 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e6b44f6-8f09-463d-b422-20e45aa79602-varrun\") pod \"csi-node-driver-sqf2j\" (UID: \"7e6b44f6-8f09-463d-b422-20e45aa79602\") " pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:41.312530 kubelet[2339]: I1108 00:26:41.312388 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b9aa2490-fb89-4921-873e-4d05909944e9-node-certs\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312530 kubelet[2339]: I1108 00:26:41.312407 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-policysync\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312530 kubelet[2339]: I1108 00:26:41.312425 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9aa2490-fb89-4921-873e-4d05909944e9-tigera-ca-bundle\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312530 kubelet[2339]: I1108 00:26:41.312440 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-var-lib-calico\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312692 kubelet[2339]: I1108 00:26:41.312454 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-var-run-calico\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312692 kubelet[2339]: I1108 00:26:41.312468 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-lib-modules\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312692 kubelet[2339]: I1108 00:26:41.312482 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdj6\" (UniqueName: \"kubernetes.io/projected/7e6b44f6-8f09-463d-b422-20e45aa79602-kube-api-access-5pdj6\") pod \"csi-node-driver-sqf2j\" (UID: 
\"7e6b44f6-8f09-463d-b422-20e45aa79602\") " pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:41.312692 kubelet[2339]: I1108 00:26:41.312499 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d6cd128-4b37-49e8-a376-76238b190121-xtables-lock\") pod \"kube-proxy-749hs\" (UID: \"8d6cd128-4b37-49e8-a376-76238b190121\") " pod="kube-system/kube-proxy-749hs" Nov 8 00:26:41.312692 kubelet[2339]: I1108 00:26:41.312514 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-cni-bin-dir\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312808 kubelet[2339]: I1108 00:26:41.312527 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-cni-log-dir\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312808 kubelet[2339]: I1108 00:26:41.312545 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-xtables-lock\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312808 kubelet[2339]: I1108 00:26:41.312559 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d5g\" (UniqueName: \"kubernetes.io/projected/b9aa2490-fb89-4921-873e-4d05909944e9-kube-api-access-47d5g\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312808 kubelet[2339]: I1108 00:26:41.312573 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d6cd128-4b37-49e8-a376-76238b190121-kube-proxy\") pod \"kube-proxy-749hs\" (UID: \"8d6cd128-4b37-49e8-a376-76238b190121\") " pod="kube-system/kube-proxy-749hs" Nov 8 00:26:41.312808 kubelet[2339]: I1108 00:26:41.312588 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d6cd128-4b37-49e8-a376-76238b190121-lib-modules\") pod \"kube-proxy-749hs\" (UID: \"8d6cd128-4b37-49e8-a376-76238b190121\") " pod="kube-system/kube-proxy-749hs" Nov 8 00:26:41.312919 kubelet[2339]: I1108 00:26:41.312604 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-cni-net-dir\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.312919 kubelet[2339]: I1108 00:26:41.312618 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e6b44f6-8f09-463d-b422-20e45aa79602-kubelet-dir\") pod \"csi-node-driver-sqf2j\" (UID: \"7e6b44f6-8f09-463d-b422-20e45aa79602\") " pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:41.312919 
kubelet[2339]: I1108 00:26:41.312649 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e6b44f6-8f09-463d-b422-20e45aa79602-registration-dir\") pod \"csi-node-driver-sqf2j\" (UID: \"7e6b44f6-8f09-463d-b422-20e45aa79602\") " pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:41.312919 kubelet[2339]: I1108 00:26:41.312664 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tz6\" (UniqueName: \"kubernetes.io/projected/8d6cd128-4b37-49e8-a376-76238b190121-kube-api-access-c7tz6\") pod \"kube-proxy-749hs\" (UID: \"8d6cd128-4b37-49e8-a376-76238b190121\") " pod="kube-system/kube-proxy-749hs" Nov 8 00:26:41.312919 kubelet[2339]: I1108 00:26:41.312689 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b9aa2490-fb89-4921-873e-4d05909944e9-flexvol-driver-host\") pod \"calico-node-kcjkh\" (UID: \"b9aa2490-fb89-4921-873e-4d05909944e9\") " pod="calico-system/calico-node-kcjkh" Nov 8 00:26:41.420408 kubelet[2339]: E1108 00:26:41.420323 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:41.420408 kubelet[2339]: W1108 00:26:41.420345 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:41.420408 kubelet[2339]: E1108 00:26:41.420366 2339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:41.429512 kubelet[2339]: E1108 00:26:41.429149 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:41.429512 kubelet[2339]: W1108 00:26:41.429167 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:41.429512 kubelet[2339]: E1108 00:26:41.429187 2339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:41.431226 kubelet[2339]: E1108 00:26:41.431154 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:41.431226 kubelet[2339]: W1108 00:26:41.431174 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:41.431226 kubelet[2339]: E1108 00:26:41.431217 2339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:41.433867 kubelet[2339]: E1108 00:26:41.433807 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:41.433867 kubelet[2339]: W1108 00:26:41.433821 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:41.433867 kubelet[2339]: E1108 00:26:41.433838 2339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:41.537472 containerd[1877]: time="2025-11-08T00:26:41.537433064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcjkh,Uid:b9aa2490-fb89-4921-873e-4d05909944e9,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:41.547344 containerd[1877]: time="2025-11-08T00:26:41.546995569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-749hs,Uid:8d6cd128-4b37-49e8-a376-76238b190121,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:42.113666 containerd[1877]: time="2025-11-08T00:26:42.111974585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:42.119080 containerd[1877]: time="2025-11-08T00:26:42.118951284Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:42.120515 containerd[1877]: time="2025-11-08T00:26:42.120468954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:26:42.121829 containerd[1877]: time="2025-11-08T00:26:42.121787205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:26:42.125582 containerd[1877]: time="2025-11-08T00:26:42.124152079Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:42.127703 containerd[1877]: time="2025-11-08T00:26:42.127621526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:42.129675 containerd[1877]: time="2025-11-08T00:26:42.129619685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.110971ms" Nov 8 00:26:42.130694 containerd[1877]: time="2025-11-08T00:26:42.130658901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.541893ms" Nov 8 00:26:42.182308 
kubelet[2339]: E1108 00:26:42.181482 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:42.285492 containerd[1877]: time="2025-11-08T00:26:42.284588933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:42.285492 containerd[1877]: time="2025-11-08T00:26:42.284662626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:42.285492 containerd[1877]: time="2025-11-08T00:26:42.284675387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.285492 containerd[1877]: time="2025-11-08T00:26:42.284741806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.287902 containerd[1877]: time="2025-11-08T00:26:42.287623255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:42.287902 containerd[1877]: time="2025-11-08T00:26:42.287683683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:42.287902 containerd[1877]: time="2025-11-08T00:26:42.287699745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.287902 containerd[1877]: time="2025-11-08T00:26:42.287769516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.368903 systemd[1]: Started cri-containerd-fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8.scope - libcontainer container fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8. Nov 8 00:26:42.370436 systemd[1]: Started cri-containerd-fe40337e70d0e42b4b7953926e662c9ad97a6b92b2991b03189ac4262adbfaf4.scope - libcontainer container fe40337e70d0e42b4b7953926e662c9ad97a6b92b2991b03189ac4262adbfaf4. Nov 8 00:26:42.398990 containerd[1877]: time="2025-11-08T00:26:42.398959684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcjkh,Uid:b9aa2490-fb89-4921-873e-4d05909944e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\"" Nov 8 00:26:42.402806 containerd[1877]: time="2025-11-08T00:26:42.402618340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:26:42.405033 containerd[1877]: time="2025-11-08T00:26:42.404974661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-749hs,Uid:8d6cd128-4b37-49e8-a376-76238b190121,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe40337e70d0e42b4b7953926e662c9ad97a6b92b2991b03189ac4262adbfaf4\"" Nov 8 00:26:42.421837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914563057.mount: Deactivated successfully. 
Nov 8 00:26:43.182308 kubelet[2339]: E1108 00:26:43.182250 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:43.314377 kubelet[2339]: E1108 00:26:43.314085 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:43.594335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826275670.mount: Deactivated successfully. Nov 8 00:26:43.696659 containerd[1877]: time="2025-11-08T00:26:43.696596614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:43.698322 containerd[1877]: time="2025-11-08T00:26:43.698266595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 8 00:26:43.700374 containerd[1877]: time="2025-11-08T00:26:43.700304758Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:43.704321 containerd[1877]: time="2025-11-08T00:26:43.703511513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:43.704321 containerd[1877]: time="2025-11-08T00:26:43.704076609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.301413715s" Nov 8 00:26:43.704321 containerd[1877]: time="2025-11-08T00:26:43.704106948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:26:43.705470 containerd[1877]: time="2025-11-08T00:26:43.705449612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:26:43.710734 containerd[1877]: time="2025-11-08T00:26:43.710698679Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:26:43.732545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266472340.mount: Deactivated successfully. 
Nov 8 00:26:43.741653 containerd[1877]: time="2025-11-08T00:26:43.741579530Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a\"" Nov 8 00:26:43.742352 containerd[1877]: time="2025-11-08T00:26:43.742318924Z" level=info msg="StartContainer for \"371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a\"" Nov 8 00:26:43.777035 systemd[1]: Started cri-containerd-371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a.scope - libcontainer container 371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a. Nov 8 00:26:43.812048 containerd[1877]: time="2025-11-08T00:26:43.811995387Z" level=info msg="StartContainer for \"371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a\" returns successfully" Nov 8 00:26:43.822362 systemd[1]: cri-containerd-371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a.scope: Deactivated successfully. Nov 8 00:26:43.887645 containerd[1877]: time="2025-11-08T00:26:43.887122336Z" level=info msg="shim disconnected" id=371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a namespace=k8s.io Nov 8 00:26:43.887645 containerd[1877]: time="2025-11-08T00:26:43.887177238Z" level=warning msg="cleaning up after shim disconnected" id=371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a namespace=k8s.io Nov 8 00:26:43.887645 containerd[1877]: time="2025-11-08T00:26:43.887186381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:44.182612 kubelet[2339]: E1108 00:26:44.182566 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:44.558208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-371e10bd216d6dbf7ad1e225abca8dd5f3f93451bb84f4c4f7b20a6ad694816a-rootfs.mount: Deactivated successfully. Nov 8 00:26:44.742995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776026870.mount: Deactivated successfully. 
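Annotation: the driver-call.go errors earlier in the log ("executable file not found in $PATH" for nodeagent~uds/uds) come from the kubelet probing the FlexVolume plugin directory before Calico's flexvol-driver init container (371e10bd…, started and exited above) has copied the uds binary into place; they stop once that container has run. A small sketch, assuming shell access on the node, that checks whether the binary is present — the path is the one printed verbatim in the kubelet errors:

# Illustrative check for the FlexVolume driver binary referenced in the kubelet errors above.
# The path below is copied from the log; nothing else is inferred.
import os

uds_driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

if os.path.isfile(uds_driver) and os.access(uds_driver, os.X_OK):
    print("uds FlexVolume driver installed; the 'driver call failed' probes should stop")
else:
    print("uds driver not installed yet; kubelet keeps logging 'executable file not found in $PATH'")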
Nov 8 00:26:45.146565 containerd[1877]: time="2025-11-08T00:26:45.146508201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:45.148708 containerd[1877]: time="2025-11-08T00:26:45.148653045Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 8 00:26:45.151378 containerd[1877]: time="2025-11-08T00:26:45.151306547Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:45.154566 containerd[1877]: time="2025-11-08T00:26:45.154501987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:45.155600 containerd[1877]: time="2025-11-08T00:26:45.155110688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.449537659s" Nov 8 00:26:45.155600 containerd[1877]: time="2025-11-08T00:26:45.155150021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:26:45.157277 containerd[1877]: time="2025-11-08T00:26:45.156971191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:26:45.161340 containerd[1877]: time="2025-11-08T00:26:45.161286201Z" level=info msg="CreateContainer within sandbox \"fe40337e70d0e42b4b7953926e662c9ad97a6b92b2991b03189ac4262adbfaf4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:26:45.183096 kubelet[2339]: E1108 00:26:45.183034 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:45.188770 containerd[1877]: time="2025-11-08T00:26:45.188717790Z" level=info msg="CreateContainer within sandbox \"fe40337e70d0e42b4b7953926e662c9ad97a6b92b2991b03189ac4262adbfaf4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1385a006f2e73edae8e504d09293f84e15292698e6c1c5a65b7b8e712668239\"" Nov 8 00:26:45.189600 containerd[1877]: time="2025-11-08T00:26:45.189562447Z" level=info msg="StartContainer for \"f1385a006f2e73edae8e504d09293f84e15292698e6c1c5a65b7b8e712668239\"" Nov 8 00:26:45.229892 systemd[1]: Started cri-containerd-f1385a006f2e73edae8e504d09293f84e15292698e6c1c5a65b7b8e712668239.scope - libcontainer container f1385a006f2e73edae8e504d09293f84e15292698e6c1c5a65b7b8e712668239. 
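Annotation: once the kube-proxy container above starts, the pod's state can be confirmed from the API side. A minimal sketch using the official kubernetes Python client, assuming an admin kubeconfig is available somewhere with access to this cluster; the pod name kube-proxy-749hs and namespace kube-system are taken from the log:

# Illustrative: confirm the kube-proxy pod from the log is Running.
# Requires the 'kubernetes' Python package and a kubeconfig that can reach the cluster.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("kube-proxy-749hs", "kube-system")
print(pod.metadata.name, pod.status.phase)
for cs in pod.status.container_statuses or []:
    print(cs.name, "ready" if cs.ready else "not ready", "restarts:", cs.restart_count)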
Nov 8 00:26:45.261973 containerd[1877]: time="2025-11-08T00:26:45.261927429Z" level=info msg="StartContainer for \"f1385a006f2e73edae8e504d09293f84e15292698e6c1c5a65b7b8e712668239\" returns successfully" Nov 8 00:26:45.312819 kubelet[2339]: E1108 00:26:45.312776 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:46.184153 kubelet[2339]: E1108 00:26:46.184100 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:47.184294 kubelet[2339]: E1108 00:26:47.184223 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:47.314937 kubelet[2339]: E1108 00:26:47.314134 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:48.121414 containerd[1877]: time="2025-11-08T00:26:48.121364273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:48.122766 containerd[1877]: time="2025-11-08T00:26:48.122520752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:26:48.124660 containerd[1877]: time="2025-11-08T00:26:48.124486929Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:48.126787 containerd[1877]: time="2025-11-08T00:26:48.126725502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:48.127413 containerd[1877]: time="2025-11-08T00:26:48.127252913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.970250495s" Nov 8 00:26:48.127413 containerd[1877]: time="2025-11-08T00:26:48.127287905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:26:48.132156 containerd[1877]: time="2025-11-08T00:26:48.132069766Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:26:48.151228 containerd[1877]: time="2025-11-08T00:26:48.151175176Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231\"" Nov 8 
00:26:48.152093 containerd[1877]: time="2025-11-08T00:26:48.152051339Z" level=info msg="StartContainer for \"512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231\"" Nov 8 00:26:48.188154 kubelet[2339]: E1108 00:26:48.185728 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:48.187293 systemd[1]: run-containerd-runc-k8s.io-512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231-runc.On5hVS.mount: Deactivated successfully. Nov 8 00:26:48.193835 systemd[1]: Started cri-containerd-512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231.scope - libcontainer container 512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231. Nov 8 00:26:48.226733 containerd[1877]: time="2025-11-08T00:26:48.226531169Z" level=info msg="StartContainer for \"512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231\" returns successfully" Nov 8 00:26:48.366570 kubelet[2339]: I1108 00:26:48.366512 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-749hs" podStartSLOduration=6.617558084 podStartE2EDuration="9.366498662s" podCreationTimestamp="2025-11-08 00:26:39 +0000 UTC" firstStartedPulling="2025-11-08 00:26:42.407012026 +0000 UTC m=+3.953601299" lastFinishedPulling="2025-11-08 00:26:45.155952607 +0000 UTC m=+6.702541877" observedRunningTime="2025-11-08 00:26:45.406075138 +0000 UTC m=+6.952664430" watchObservedRunningTime="2025-11-08 00:26:48.366498662 +0000 UTC m=+9.913087956" Nov 8 00:26:49.186369 kubelet[2339]: E1108 00:26:49.186272 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:49.315168 kubelet[2339]: E1108 00:26:49.314564 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:49.436705 systemd[1]: cri-containerd-512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231.scope: Deactivated successfully. Nov 8 00:26:49.461788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231-rootfs.mount: Deactivated successfully. 
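Annotation: the pod_startup_latency_tracker entry above for kube-proxy-749hs fits a simple relationship that the logged numbers confirm: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the arithmetic from the monotonic m=+ offsets printed in the log:

# Recompute the kube-proxy startup figures from the log entry above.
# All numbers are copied from the log; the formula is E2E minus image-pull time.
first_started_pulling = 3.953601299    # m=+ offset of firstStartedPulling, seconds
last_finished_pulling = 6.702541877    # m=+ offset of lastFinishedPulling, seconds
pod_start_e2e = 9.366498662            # podStartE2EDuration, seconds

image_pull = last_finished_pulling - first_started_pulling
pod_start_slo = pod_start_e2e - image_pull

print(f"image pull time : {image_pull:.9f}s")     # ~2.748940578s
print(f"podStartSLO     : {pod_start_slo:.9f}s")  # ~6.617558084s, matching the log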
Nov 8 00:26:49.484655 kubelet[2339]: I1108 00:26:49.484610 2339 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:26:50.077364 containerd[1877]: time="2025-11-08T00:26:50.077280866Z" level=info msg="shim disconnected" id=512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231 namespace=k8s.io Nov 8 00:26:50.077364 containerd[1877]: time="2025-11-08T00:26:50.077366719Z" level=warning msg="cleaning up after shim disconnected" id=512f2b67f1843cc2172d4f501e113e792074928117afdca7c9eef2f5b703c231 namespace=k8s.io Nov 8 00:26:50.077880 containerd[1877]: time="2025-11-08T00:26:50.077379585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:50.187614 kubelet[2339]: E1108 00:26:50.187466 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:50.357197 containerd[1877]: time="2025-11-08T00:26:50.356959898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:26:51.188707 kubelet[2339]: E1108 00:26:51.188613 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:51.318653 systemd[1]: Created slice kubepods-besteffort-pod7e6b44f6_8f09_463d_b422_20e45aa79602.slice - libcontainer container kubepods-besteffort-pod7e6b44f6_8f09_463d_b422_20e45aa79602.slice. Nov 8 00:26:51.325146 containerd[1877]: time="2025-11-08T00:26:51.325104931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqf2j,Uid:7e6b44f6-8f09-463d-b422-20e45aa79602,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:51.409362 containerd[1877]: time="2025-11-08T00:26:51.409307143Z" level=error msg="Failed to destroy network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:51.409843 containerd[1877]: time="2025-11-08T00:26:51.409804221Z" level=error msg="encountered an error cleaning up failed sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:51.410098 containerd[1877]: time="2025-11-08T00:26:51.409881737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqf2j,Uid:7e6b44f6-8f09-463d-b422-20e45aa79602,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:51.410162 kubelet[2339]: E1108 00:26:51.410126 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:51.410218 kubelet[2339]: E1108 00:26:51.410201 2339 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:51.410317 kubelet[2339]: E1108 00:26:51.410228 2339 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:26:51.410360 kubelet[2339]: E1108 00:26:51.410303 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:51.413386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588-shm.mount: Deactivated successfully. 
Nov 8 00:26:52.189586 kubelet[2339]: E1108 00:26:52.189531 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:52.363701 kubelet[2339]: I1108 00:26:52.363667 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:26:52.364456 containerd[1877]: time="2025-11-08T00:26:52.364397428Z" level=info msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" Nov 8 00:26:52.364807 containerd[1877]: time="2025-11-08T00:26:52.364616696Z" level=info msg="Ensure that sandbox 33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588 in task-service has been cleanup successfully" Nov 8 00:26:52.393145 containerd[1877]: time="2025-11-08T00:26:52.393080857Z" level=error msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" failed" error="failed to destroy network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:52.393350 kubelet[2339]: E1108 00:26:52.393308 2339 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:26:52.393437 kubelet[2339]: E1108 00:26:52.393370 2339 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588"} Nov 8 00:26:52.393497 kubelet[2339]: E1108 00:26:52.393433 2339 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e6b44f6-8f09-463d-b422-20e45aa79602\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:52.393497 kubelet[2339]: E1108 00:26:52.393470 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e6b44f6-8f09-463d-b422-20e45aa79602\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:26:52.611323 systemd[1]: Created slice kubepods-besteffort-podc913df09_aa74_4701_9dc0_2544dbe42937.slice - libcontainer container kubepods-besteffort-podc913df09_aa74_4701_9dc0_2544dbe42937.slice. 
Nov 8 00:26:52.696163 kubelet[2339]: I1108 00:26:52.695137 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmn7q\" (UniqueName: \"kubernetes.io/projected/c913df09-aa74-4701-9dc0-2544dbe42937-kube-api-access-hmn7q\") pod \"nginx-deployment-bb8f74bfb-dzhh5\" (UID: \"c913df09-aa74-4701-9dc0-2544dbe42937\") " pod="default/nginx-deployment-bb8f74bfb-dzhh5" Nov 8 00:26:52.919808 containerd[1877]: time="2025-11-08T00:26:52.919701439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dzhh5,Uid:c913df09-aa74-4701-9dc0-2544dbe42937,Namespace:default,Attempt:0,}" Nov 8 00:26:53.040428 containerd[1877]: time="2025-11-08T00:26:53.040293795Z" level=error msg="Failed to destroy network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:53.041031 containerd[1877]: time="2025-11-08T00:26:53.040846339Z" level=error msg="encountered an error cleaning up failed sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:53.041031 containerd[1877]: time="2025-11-08T00:26:53.040913462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dzhh5,Uid:c913df09-aa74-4701-9dc0-2544dbe42937,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:53.041822 kubelet[2339]: E1108 00:26:53.041380 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:53.041822 kubelet[2339]: E1108 00:26:53.041442 2339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-dzhh5" Nov 8 00:26:53.041822 kubelet[2339]: E1108 00:26:53.041466 2339 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-dzhh5" Nov 8 00:26:53.042036 kubelet[2339]: E1108 00:26:53.041526 2339 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-bb8f74bfb-dzhh5_default(c913df09-aa74-4701-9dc0-2544dbe42937)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-bb8f74bfb-dzhh5_default(c913df09-aa74-4701-9dc0-2544dbe42937)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-dzhh5" podUID="c913df09-aa74-4701-9dc0-2544dbe42937" Nov 8 00:26:53.043907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c-shm.mount: Deactivated successfully. Nov 8 00:26:53.189951 kubelet[2339]: E1108 00:26:53.189913 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:53.368271 kubelet[2339]: I1108 00:26:53.368218 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:26:53.368799 containerd[1877]: time="2025-11-08T00:26:53.368763506Z" level=info msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" Nov 8 00:26:53.369193 containerd[1877]: time="2025-11-08T00:26:53.368970224Z" level=info msg="Ensure that sandbox 437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c in task-service has been cleanup successfully" Nov 8 00:26:53.418607 containerd[1877]: time="2025-11-08T00:26:53.418267498Z" level=error msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" failed" error="failed to destroy network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:53.419047 kubelet[2339]: E1108 00:26:53.419003 2339 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:26:53.419172 kubelet[2339]: E1108 00:26:53.419057 2339 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c"} Nov 8 00:26:53.419172 kubelet[2339]: E1108 00:26:53.419107 2339 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c913df09-aa74-4701-9dc0-2544dbe42937\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:53.419172 
kubelet[2339]: E1108 00:26:53.419144 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c913df09-aa74-4701-9dc0-2544dbe42937\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-dzhh5" podUID="c913df09-aa74-4701-9dc0-2544dbe42937" Nov 8 00:26:54.191162 kubelet[2339]: E1108 00:26:54.190888 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:55.192102 kubelet[2339]: E1108 00:26:55.192047 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:56.063083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379001797.mount: Deactivated successfully. Nov 8 00:26:56.117285 containerd[1877]: time="2025-11-08T00:26:56.117220440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:56.119173 containerd[1877]: time="2025-11-08T00:26:56.119023212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:26:56.121383 containerd[1877]: time="2025-11-08T00:26:56.121304209Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:56.125413 containerd[1877]: time="2025-11-08T00:26:56.124879392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:56.125413 containerd[1877]: time="2025-11-08T00:26:56.125259104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.768254467s" Nov 8 00:26:56.125413 containerd[1877]: time="2025-11-08T00:26:56.125284870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:26:56.155157 containerd[1877]: time="2025-11-08T00:26:56.154961456Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:26:56.192984 kubelet[2339]: E1108 00:26:56.192906 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:56.209212 containerd[1877]: time="2025-11-08T00:26:56.209140988Z" level=info msg="CreateContainer within sandbox \"fc7a3adbb7231c9c97bb8b64784d1c8d946f366ff59caf4b656308caa0935bc8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c\"" Nov 8 00:26:56.209801 containerd[1877]: 
time="2025-11-08T00:26:56.209764260Z" level=info msg="StartContainer for \"9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c\"" Nov 8 00:26:56.290849 systemd[1]: Started cri-containerd-9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c.scope - libcontainer container 9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c. Nov 8 00:26:56.331388 containerd[1877]: time="2025-11-08T00:26:56.331152503Z" level=info msg="StartContainer for \"9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c\" returns successfully" Nov 8 00:26:56.404001 kubelet[2339]: I1108 00:26:56.403941 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kcjkh" podStartSLOduration=3.679728624 podStartE2EDuration="17.403922755s" podCreationTimestamp="2025-11-08 00:26:39 +0000 UTC" firstStartedPulling="2025-11-08 00:26:42.402007468 +0000 UTC m=+3.948596742" lastFinishedPulling="2025-11-08 00:26:56.126201599 +0000 UTC m=+17.672790873" observedRunningTime="2025-11-08 00:26:56.401950976 +0000 UTC m=+17.948540265" watchObservedRunningTime="2025-11-08 00:26:56.403922755 +0000 UTC m=+17.950512039" Nov 8 00:26:56.440815 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:26:56.440949 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:26:57.193738 kubelet[2339]: E1108 00:26:57.193666 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:58.123662 kernel: bpftool[3074]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:26:58.194660 kubelet[2339]: E1108 00:26:58.194581 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:58.342776 (udev-worker)[2933]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:26:58.345789 systemd-networkd[1807]: vxlan.calico: Link UP Nov 8 00:26:58.346666 systemd-networkd[1807]: vxlan.calico: Gained carrier Nov 8 00:26:58.381560 (udev-worker)[3098]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:26:59.179228 kubelet[2339]: E1108 00:26:59.179175 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:59.195798 kubelet[2339]: E1108 00:26:59.195742 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:26:59.584145 systemd-networkd[1807]: vxlan.calico: Gained IPv6LL Nov 8 00:26:59.965138 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 8 00:27:00.196141 kubelet[2339]: E1108 00:27:00.196094 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:01.197024 kubelet[2339]: E1108 00:27:01.196841 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:02.198348 kubelet[2339]: E1108 00:27:02.197913 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:02.353516 ntpd[1862]: Listen normally on 7 vxlan.calico 192.168.87.192:123 Nov 8 00:27:02.353760 ntpd[1862]: Listen normally on 8 vxlan.calico [fe80::643a:43ff:fed2:6d19%3]:123 Nov 8 00:27:02.354118 ntpd[1862]: 8 Nov 00:27:02 ntpd[1862]: Listen normally on 7 vxlan.calico 192.168.87.192:123 Nov 8 00:27:02.354118 ntpd[1862]: 8 Nov 00:27:02 ntpd[1862]: Listen normally on 8 vxlan.calico [fe80::643a:43ff:fed2:6d19%3]:123 Nov 8 00:27:03.199472 kubelet[2339]: E1108 00:27:03.199355 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:04.200006 kubelet[2339]: E1108 00:27:04.199955 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:05.200560 kubelet[2339]: E1108 00:27:05.200502 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:05.313863 containerd[1877]: time="2025-11-08T00:27:05.313535462Z" level=info msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.487 [INFO][3164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.489 [INFO][3164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" iface="eth0" netns="/var/run/netns/cni-c21c969d-0a9b-a545-079a-4de4ca4fa592" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.491 [INFO][3164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" iface="eth0" netns="/var/run/netns/cni-c21c969d-0a9b-a545-079a-4de4ca4fa592" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.492 [INFO][3164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" iface="eth0" netns="/var/run/netns/cni-c21c969d-0a9b-a545-079a-4de4ca4fa592" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.492 [INFO][3164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.492 [INFO][3164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.723 [INFO][3171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.723 [INFO][3171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.723 [INFO][3171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.735 [WARNING][3171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.735 [INFO][3171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.737 [INFO][3171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:05.742099 containerd[1877]: 2025-11-08 00:27:05.740 [INFO][3164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:05.745004 containerd[1877]: time="2025-11-08T00:27:05.744834661Z" level=info msg="TearDown network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" successfully" Nov 8 00:27:05.745004 containerd[1877]: time="2025-11-08T00:27:05.744890195Z" level=info msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" returns successfully" Nov 8 00:27:05.746144 systemd[1]: run-netns-cni\x2dc21c969d\x2d0a9b\x2da545\x2d079a\x2d4de4ca4fa592.mount: Deactivated successfully. Nov 8 00:27:05.753742 containerd[1877]: time="2025-11-08T00:27:05.753692244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dzhh5,Uid:c913df09-aa74-4701-9dc0-2544dbe42937,Namespace:default,Attempt:1,}" Nov 8 00:27:05.926188 systemd-networkd[1807]: cali9b5bea79ca5: Link UP Nov 8 00:27:05.927366 systemd-networkd[1807]: cali9b5bea79ca5: Gained carrier Nov 8 00:27:05.927742 (udev-worker)[3197]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.821 [INFO][3179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0 nginx-deployment-bb8f74bfb- default c913df09-aa74-4701-9dc0-2544dbe42937 1244 0 2025-11-08 00:26:52 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.96 nginx-deployment-bb8f74bfb-dzhh5 eth0 default [] [] [kns.default ksa.default.default] cali9b5bea79ca5 [] [] }} ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.821 [INFO][3179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.857 [INFO][3191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" HandleID="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.858 [INFO][3191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" HandleID="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.96", "pod":"nginx-deployment-bb8f74bfb-dzhh5", "timestamp":"2025-11-08 00:27:05.857911651 +0000 UTC"}, Hostname:"172.31.23.96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.858 [INFO][3191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.858 [INFO][3191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.858 [INFO][3191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.96' Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.867 [INFO][3191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.893 [INFO][3191] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.899 [INFO][3191] ipam/ipam.go 511: Trying affinity for 192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.901 [INFO][3191] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.904 [INFO][3191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.905 [INFO][3191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.907 [INFO][3191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3 Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.912 [INFO][3191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.918 [INFO][3191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.87.193/26] block=192.168.87.192/26 handle="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.918 [INFO][3191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.193/26] handle="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" host="172.31.23.96" Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.918 [INFO][3191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:05.940862 containerd[1877]: 2025-11-08 00:27:05.918 [INFO][3191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.87.193/26] IPv6=[] ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" HandleID="k8s-pod-network.188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.920 [INFO][3179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"c913df09-aa74-4701-9dc0-2544dbe42937", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-dzhh5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b5bea79ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.920 [INFO][3179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.193/32] ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.920 [INFO][3179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b5bea79ca5 ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.927 [INFO][3179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.928 [INFO][3179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" 
WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"c913df09-aa74-4701-9dc0-2544dbe42937", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3", Pod:"nginx-deployment-bb8f74bfb-dzhh5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b5bea79ca5", MAC:"4a:c2:ac:c6:92:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:05.942052 containerd[1877]: 2025-11-08 00:27:05.939 [INFO][3179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dzhh5" WorkloadEndpoint="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:05.974937 containerd[1877]: time="2025-11-08T00:27:05.974832184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:05.975240 containerd[1877]: time="2025-11-08T00:27:05.974987301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:05.975240 containerd[1877]: time="2025-11-08T00:27:05.975052402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:05.975459 containerd[1877]: time="2025-11-08T00:27:05.975402348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:06.008876 systemd[1]: Started cri-containerd-188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3.scope - libcontainer container 188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3. 
Nov 8 00:27:06.059664 containerd[1877]: time="2025-11-08T00:27:06.059003437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dzhh5,Uid:c913df09-aa74-4701-9dc0-2544dbe42937,Namespace:default,Attempt:1,} returns sandbox id \"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3\"" Nov 8 00:27:06.060910 containerd[1877]: time="2025-11-08T00:27:06.060883868Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 8 00:27:06.201140 kubelet[2339]: E1108 00:27:06.201099 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:06.313787 containerd[1877]: time="2025-11-08T00:27:06.313443743Z" level=info msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.365 [INFO][3262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.366 [INFO][3262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" iface="eth0" netns="/var/run/netns/cni-28cbfe68-38d0-5d26-9c82-789040d52091" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.366 [INFO][3262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" iface="eth0" netns="/var/run/netns/cni-28cbfe68-38d0-5d26-9c82-789040d52091" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.368 [INFO][3262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" iface="eth0" netns="/var/run/netns/cni-28cbfe68-38d0-5d26-9c82-789040d52091" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.368 [INFO][3262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.368 [INFO][3262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.398 [INFO][3269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.399 [INFO][3269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.399 [INFO][3269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.407 [WARNING][3269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.407 [INFO][3269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.410 [INFO][3269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:06.413312 containerd[1877]: 2025-11-08 00:27:06.411 [INFO][3262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:06.414840 containerd[1877]: time="2025-11-08T00:27:06.414722915Z" level=info msg="TearDown network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" successfully" Nov 8 00:27:06.414840 containerd[1877]: time="2025-11-08T00:27:06.414793022Z" level=info msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" returns successfully" Nov 8 00:27:06.415399 systemd[1]: run-netns-cni\x2d28cbfe68\x2d38d0\x2d5d26\x2d9c82\x2d789040d52091.mount: Deactivated successfully. Nov 8 00:27:06.419208 containerd[1877]: time="2025-11-08T00:27:06.419164628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqf2j,Uid:7e6b44f6-8f09-463d-b422-20e45aa79602,Namespace:calico-system,Attempt:1,}" Nov 8 00:27:06.563873 (udev-worker)[3199]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:27:06.567325 systemd-networkd[1807]: cali16b724f94f9: Link UP Nov 8 00:27:06.567677 systemd-networkd[1807]: cali16b724f94f9: Gained carrier Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.475 [INFO][3276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.96-k8s-csi--node--driver--sqf2j-eth0 csi-node-driver- calico-system 7e6b44f6-8f09-463d-b422-20e45aa79602 1250 0 2025-11-08 00:26:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.23.96 csi-node-driver-sqf2j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali16b724f94f9 [] [] }} ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.476 [INFO][3276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.507 [INFO][3287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" HandleID="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.507 [INFO][3287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" HandleID="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c55b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.23.96", "pod":"csi-node-driver-sqf2j", "timestamp":"2025-11-08 00:27:06.507272434 +0000 UTC"}, Hostname:"172.31.23.96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.507 [INFO][3287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.507 [INFO][3287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.507 [INFO][3287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.96' Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.516 [INFO][3287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.522 [INFO][3287] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.529 [INFO][3287] ipam/ipam.go 511: Trying affinity for 192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.532 [INFO][3287] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.535 [INFO][3287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.535 [INFO][3287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.538 [INFO][3287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349 Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.546 [INFO][3287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.554 [INFO][3287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.87.194/26] block=192.168.87.192/26 handle="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.554 [INFO][3287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.194/26] handle="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" host="172.31.23.96" Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.554 [INFO][3287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:06.587650 containerd[1877]: 2025-11-08 00:27:06.554 [INFO][3287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.87.194/26] IPv6=[] ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" HandleID="k8s-pod-network.1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.556 [INFO][3276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-csi--node--driver--sqf2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e6b44f6-8f09-463d-b422-20e45aa79602", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"", Pod:"csi-node-driver-sqf2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16b724f94f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.557 [INFO][3276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.194/32] ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.557 [INFO][3276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16b724f94f9 ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.562 [INFO][3276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.563 [INFO][3276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" 
WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-csi--node--driver--sqf2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e6b44f6-8f09-463d-b422-20e45aa79602", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349", Pod:"csi-node-driver-sqf2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16b724f94f9", MAC:"a6:d3:01:a8:38:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:06.588953 containerd[1877]: 2025-11-08 00:27:06.584 [INFO][3276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349" Namespace="calico-system" Pod="csi-node-driver-sqf2j" WorkloadEndpoint="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:06.616138 containerd[1877]: time="2025-11-08T00:27:06.615874409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:06.616138 containerd[1877]: time="2025-11-08T00:27:06.615927661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:06.616138 containerd[1877]: time="2025-11-08T00:27:06.615944329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:06.616944 containerd[1877]: time="2025-11-08T00:27:06.616034123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:06.635886 systemd[1]: Started cri-containerd-1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349.scope - libcontainer container 1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349. 
Nov 8 00:27:06.665446 containerd[1877]: time="2025-11-08T00:27:06.665161214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqf2j,Uid:7e6b44f6-8f09-463d-b422-20e45aa79602,Namespace:calico-system,Attempt:1,} returns sandbox id \"1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349\"" Nov 8 00:27:07.201290 kubelet[2339]: E1108 00:27:07.201244 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:07.776717 systemd-networkd[1807]: cali9b5bea79ca5: Gained IPv6LL Nov 8 00:27:08.033446 systemd-networkd[1807]: cali16b724f94f9: Gained IPv6LL Nov 8 00:27:08.202221 kubelet[2339]: E1108 00:27:08.202158 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:08.724765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001724894.mount: Deactivated successfully. Nov 8 00:27:09.203414 kubelet[2339]: E1108 00:27:09.202908 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:10.164796 containerd[1877]: time="2025-11-08T00:27:10.164725904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:10.166694 containerd[1877]: time="2025-11-08T00:27:10.166610919Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73311946" Nov 8 00:27:10.168828 containerd[1877]: time="2025-11-08T00:27:10.168763838Z" level=info msg="ImageCreate event name:\"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:10.173856 containerd[1877]: time="2025-11-08T00:27:10.173147715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:10.176737 containerd[1877]: time="2025-11-08T00:27:10.176680851Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\", size \"73311824\" in 4.115493597s" Nov 8 00:27:10.176737 containerd[1877]: time="2025-11-08T00:27:10.176735421Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 8 00:27:10.177917 containerd[1877]: time="2025-11-08T00:27:10.177884385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:10.183244 containerd[1877]: time="2025-11-08T00:27:10.183186141Z" level=info msg="CreateContainer within sandbox \"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Nov 8 00:27:10.203401 kubelet[2339]: E1108 00:27:10.203334 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:10.212704 containerd[1877]: time="2025-11-08T00:27:10.212519014Z" level=info msg="CreateContainer within sandbox \"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns 
container id \"9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a\"" Nov 8 00:27:10.213373 containerd[1877]: time="2025-11-08T00:27:10.213173934Z" level=info msg="StartContainer for \"9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a\"" Nov 8 00:27:10.252932 systemd[1]: run-containerd-runc-k8s.io-9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a-runc.8chpRx.mount: Deactivated successfully. Nov 8 00:27:10.263886 systemd[1]: Started cri-containerd-9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a.scope - libcontainer container 9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a. Nov 8 00:27:10.292770 containerd[1877]: time="2025-11-08T00:27:10.292604091Z" level=info msg="StartContainer for \"9de3ffe1455471755a0371f5e94aef177b07a0f9f0bc806ab998e85796c8238a\" returns successfully" Nov 8 00:27:10.353318 ntpd[1862]: Listen normally on 9 cali9b5bea79ca5 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:27:10.353744 ntpd[1862]: 8 Nov 00:27:10 ntpd[1862]: Listen normally on 9 cali9b5bea79ca5 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:27:10.353744 ntpd[1862]: 8 Nov 00:27:10 ntpd[1862]: Listen normally on 10 cali16b724f94f9 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:27:10.353395 ntpd[1862]: Listen normally on 10 cali16b724f94f9 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:27:10.495942 kubelet[2339]: I1108 00:27:10.495889 2339 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:27:10.637439 containerd[1877]: time="2025-11-08T00:27:10.637048542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:10.639428 containerd[1877]: time="2025-11-08T00:27:10.639360057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:10.639869 containerd[1877]: time="2025-11-08T00:27:10.639499672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:10.643448 kubelet[2339]: E1108 00:27:10.643376 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:10.643940 kubelet[2339]: E1108 00:27:10.643669 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:10.645607 kubelet[2339]: E1108 00:27:10.644406 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:10.645607 kubelet[2339]: I1108 00:27:10.645393 2339 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-dzhh5" podStartSLOduration=14.527713048 podStartE2EDuration="18.645378733s" podCreationTimestamp="2025-11-08 00:26:52 +0000 UTC" firstStartedPulling="2025-11-08 00:27:06.060045543 +0000 UTC m=+27.606634813" lastFinishedPulling="2025-11-08 00:27:10.177711222 +0000 UTC m=+31.724300498" observedRunningTime="2025-11-08 00:27:10.448879825 +0000 UTC m=+31.995469118" watchObservedRunningTime="2025-11-08 00:27:10.645378733 +0000 UTC m=+32.191968028" Nov 8 00:27:10.648834 containerd[1877]: time="2025-11-08T00:27:10.648795760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:10.935823 containerd[1877]: time="2025-11-08T00:27:10.935774492Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:10.938682 containerd[1877]: time="2025-11-08T00:27:10.938584649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:10.938849 containerd[1877]: time="2025-11-08T00:27:10.938706297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:10.938995 kubelet[2339]: E1108 00:27:10.938926 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:10.938995 kubelet[2339]: E1108 00:27:10.938971 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:10.939201 kubelet[2339]: E1108 00:27:10.939044 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:10.939201 kubelet[2339]: E1108 00:27:10.939142 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:27:11.203971 kubelet[2339]: E1108 00:27:11.203844 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:11.431244 kubelet[2339]: E1108 00:27:11.431199 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:27:12.204499 kubelet[2339]: E1108 00:27:12.204424 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:13.205159 kubelet[2339]: E1108 00:27:13.205058 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:14.206179 kubelet[2339]: E1108 00:27:14.206132 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:14.592611 update_engine[1867]: I20251108 00:27:14.592150 1867 update_attempter.cc:509] Updating boot flags... Nov 8 00:27:14.648668 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3492) Nov 8 00:27:14.775786 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3494) Nov 8 00:27:15.206925 kubelet[2339]: E1108 00:27:15.206852 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:16.207384 kubelet[2339]: E1108 00:27:16.207330 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:17.208522 kubelet[2339]: E1108 00:27:17.208447 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:17.946391 systemd[1]: Created slice kubepods-besteffort-pod1091ec65_8fd6_4079_b1c5_d78518cfd46c.slice - libcontainer container kubepods-besteffort-pod1091ec65_8fd6_4079_b1c5_d78518cfd46c.slice. 
Nov 8 00:27:18.066098 kubelet[2339]: I1108 00:27:18.066045 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1091ec65-8fd6-4079-b1c5-d78518cfd46c-data\") pod \"nfs-server-provisioner-0\" (UID: \"1091ec65-8fd6-4079-b1c5-d78518cfd46c\") " pod="default/nfs-server-provisioner-0" Nov 8 00:27:18.066404 kubelet[2339]: I1108 00:27:18.066134 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26w4c\" (UniqueName: \"kubernetes.io/projected/1091ec65-8fd6-4079-b1c5-d78518cfd46c-kube-api-access-26w4c\") pod \"nfs-server-provisioner-0\" (UID: \"1091ec65-8fd6-4079-b1c5-d78518cfd46c\") " pod="default/nfs-server-provisioner-0" Nov 8 00:27:18.209119 kubelet[2339]: E1108 00:27:18.208974 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:18.255612 containerd[1877]: time="2025-11-08T00:27:18.255546605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1091ec65-8fd6-4079-b1c5-d78518cfd46c,Namespace:default,Attempt:0,}" Nov 8 00:27:18.427670 systemd-networkd[1807]: cali60e51b789ff: Link UP Nov 8 00:27:18.428603 systemd-networkd[1807]: cali60e51b789ff: Gained carrier Nov 8 00:27:18.431281 (udev-worker)[3686]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.331 [INFO][3668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.96-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 1091ec65-8fd6-4079-b1c5-d78518cfd46c 1343 0 2025-11-08 00:27:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.23.96 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.331 [INFO][3668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.366 [INFO][3679] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" HandleID="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Workload="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.448451 
containerd[1877]: 2025-11-08 00:27:18.366 [INFO][3679] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" HandleID="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Workload="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.96", "pod":"nfs-server-provisioner-0", "timestamp":"2025-11-08 00:27:18.366216187 +0000 UTC"}, Hostname:"172.31.23.96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.366 [INFO][3679] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.366 [INFO][3679] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.366 [INFO][3679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.96' Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.377 [INFO][3679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.385 [INFO][3679] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.393 [INFO][3679] ipam/ipam.go 511: Trying affinity for 192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.397 [INFO][3679] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.400 [INFO][3679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.400 [INFO][3679] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.403 [INFO][3679] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34 Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.410 [INFO][3679] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.422 [INFO][3679] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.87.195/26] block=192.168.87.192/26 handle="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.422 [INFO][3679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.195/26] handle="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" host="172.31.23.96" Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.422 [INFO][3679] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:18.448451 containerd[1877]: 2025-11-08 00:27:18.422 [INFO][3679] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.87.195/26] IPv6=[] ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" HandleID="k8s-pod-network.53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Workload="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.450354 containerd[1877]: 2025-11-08 00:27:18.424 [INFO][3668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1091ec65-8fd6-4079-b1c5-d78518cfd46c", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:18.450354 containerd[1877]: 2025-11-08 00:27:18.425 [INFO][3668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.195/32] ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.450354 containerd[1877]: 2025-11-08 00:27:18.425 [INFO][3668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.450354 containerd[1877]: 2025-11-08 00:27:18.428 [INFO][3668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.451604 containerd[1877]: 2025-11-08 00:27:18.428 [INFO][3668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1091ec65-8fd6-4079-b1c5-d78518cfd46c", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"8a:c8:a3:a5:9f:f7", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:18.451604 containerd[1877]: 2025-11-08 00:27:18.446 [INFO][3668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.96-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:27:18.480282 containerd[1877]: time="2025-11-08T00:27:18.480088458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:18.480282 containerd[1877]: time="2025-11-08T00:27:18.480150758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:18.481976 containerd[1877]: time="2025-11-08T00:27:18.480174002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:18.481976 containerd[1877]: time="2025-11-08T00:27:18.480708442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:18.512922 systemd[1]: Started cri-containerd-53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34.scope - libcontainer container 53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34. 
Nov 8 00:27:18.562309 containerd[1877]: time="2025-11-08T00:27:18.562267945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1091ec65-8fd6-4079-b1c5-d78518cfd46c,Namespace:default,Attempt:0,} returns sandbox id \"53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34\"" Nov 8 00:27:18.564911 containerd[1877]: time="2025-11-08T00:27:18.564659776Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Nov 8 00:27:19.179249 kubelet[2339]: E1108 00:27:19.179200 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:19.210651 kubelet[2339]: E1108 00:27:19.209727 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:19.872932 systemd-networkd[1807]: cali60e51b789ff: Gained IPv6LL Nov 8 00:27:20.229340 kubelet[2339]: E1108 00:27:20.229008 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:20.820451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335670683.mount: Deactivated successfully. Nov 8 00:27:21.236282 kubelet[2339]: E1108 00:27:21.235825 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:22.237277 kubelet[2339]: E1108 00:27:22.237240 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:22.353304 ntpd[1862]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:27:22.353731 ntpd[1862]: 8 Nov 00:27:22 ntpd[1862]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:27:22.965503 containerd[1877]: time="2025-11-08T00:27:22.964948352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:22.970661 containerd[1877]: time="2025-11-08T00:27:22.970503320Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Nov 8 00:27:22.976106 containerd[1877]: time="2025-11-08T00:27:22.976056552Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:22.990716 containerd[1877]: time="2025-11-08T00:27:22.989625315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:22.992845 containerd[1877]: time="2025-11-08T00:27:22.992604233Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.427907615s" Nov 8 00:27:22.992845 containerd[1877]: time="2025-11-08T00:27:22.992662809Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" 
Nov 8 00:27:23.112041 containerd[1877]: time="2025-11-08T00:27:23.111946427Z" level=info msg="CreateContainer within sandbox \"53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Nov 8 00:27:23.148197 containerd[1877]: time="2025-11-08T00:27:23.148136106Z" level=info msg="CreateContainer within sandbox \"53e9f36bc41e2fadd7b540b334053e0336c75c047563841b43a337b4d2a4bc34\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0\"" Nov 8 00:27:23.152100 containerd[1877]: time="2025-11-08T00:27:23.152053592Z" level=info msg="StartContainer for \"f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0\"" Nov 8 00:27:23.197453 systemd[1]: run-containerd-runc-k8s.io-f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0-runc.ANnyuR.mount: Deactivated successfully. Nov 8 00:27:23.204841 systemd[1]: Started cri-containerd-f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0.scope - libcontainer container f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0. Nov 8 00:27:23.238486 containerd[1877]: time="2025-11-08T00:27:23.238252403Z" level=info msg="StartContainer for \"f9c008f83e5d7a7867745ba1ae5c028eb47a9252104c62c8f45c0cca7a28dfb0\" returns successfully" Nov 8 00:27:23.239655 kubelet[2339]: E1108 00:27:23.239076 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:23.518844 kubelet[2339]: I1108 00:27:23.518548 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.059908085 podStartE2EDuration="6.513343218s" podCreationTimestamp="2025-11-08 00:27:17 +0000 UTC" firstStartedPulling="2025-11-08 00:27:18.564003606 +0000 UTC m=+40.110592880" lastFinishedPulling="2025-11-08 00:27:23.017438743 +0000 UTC m=+44.564028013" observedRunningTime="2025-11-08 00:27:23.511978446 +0000 UTC m=+45.058567738" watchObservedRunningTime="2025-11-08 00:27:23.513343218 +0000 UTC m=+45.059932510" Nov 8 00:27:24.240162 kubelet[2339]: E1108 00:27:24.240104 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:24.318243 containerd[1877]: time="2025-11-08T00:27:24.318191995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:24.619241 containerd[1877]: time="2025-11-08T00:27:24.619105905Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:24.621850 containerd[1877]: time="2025-11-08T00:27:24.621449192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:24.621850 containerd[1877]: time="2025-11-08T00:27:24.621541000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:24.622007 kubelet[2339]: E1108 00:27:24.621742 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:24.624362 kubelet[2339]: E1108 00:27:24.624101 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:24.624362 kubelet[2339]: E1108 00:27:24.624344 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:24.625811 containerd[1877]: time="2025-11-08T00:27:24.625765490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:24.908716 containerd[1877]: time="2025-11-08T00:27:24.908564607Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:24.910952 containerd[1877]: time="2025-11-08T00:27:24.910885009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:24.911161 containerd[1877]: time="2025-11-08T00:27:24.910919780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:24.911226 kubelet[2339]: E1108 00:27:24.911148 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:24.911226 kubelet[2339]: E1108 00:27:24.911197 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:24.911336 kubelet[2339]: E1108 00:27:24.911296 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:24.914225 kubelet[2339]: E1108 00:27:24.914151 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:27:25.240578 kubelet[2339]: E1108 00:27:25.240523 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:26.241561 kubelet[2339]: E1108 00:27:26.241486 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:27.242177 kubelet[2339]: E1108 00:27:27.242109 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:28.242803 kubelet[2339]: E1108 00:27:28.242764 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:29.243097 kubelet[2339]: E1108 00:27:29.243035 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:30.243872 kubelet[2339]: E1108 00:27:30.243823 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:31.244277 kubelet[2339]: E1108 00:27:31.244214 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:32.244684 kubelet[2339]: E1108 00:27:32.244610 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:33.244816 kubelet[2339]: E1108 00:27:33.244747 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:34.245042 kubelet[2339]: E1108 00:27:34.244901 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:35.245075 kubelet[2339]: E1108 00:27:35.245033 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:35.314969 kubelet[2339]: E1108 00:27:35.314870 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:27:36.245803 kubelet[2339]: E1108 00:27:36.245755 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:37.246131 kubelet[2339]: E1108 00:27:37.246061 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:38.247182 kubelet[2339]: E1108 00:27:38.247118 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:39.179358 kubelet[2339]: E1108 00:27:39.179287 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:39.211720 containerd[1877]: time="2025-11-08T00:27:39.211681316Z" level=info msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" Nov 8 00:27:39.249461 kubelet[2339]: E1108 00:27:39.249297 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.274 [WARNING][3865] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-csi--node--driver--sqf2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e6b44f6-8f09-463d-b422-20e45aa79602", ResourceVersion:"1446", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349", Pod:"csi-node-driver-sqf2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16b724f94f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.274 [INFO][3865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.274 [INFO][3865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" iface="eth0" netns="" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.274 [INFO][3865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.274 [INFO][3865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.296 [INFO][3873] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.296 [INFO][3873] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.296 [INFO][3873] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.304 [WARNING][3873] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.304 [INFO][3873] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.306 [INFO][3873] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:39.310404 containerd[1877]: 2025-11-08 00:27:39.308 [INFO][3865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.310404 containerd[1877]: time="2025-11-08T00:27:39.310158132Z" level=info msg="TearDown network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" successfully" Nov 8 00:27:39.310404 containerd[1877]: time="2025-11-08T00:27:39.310182255Z" level=info msg="StopPodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" returns successfully" Nov 8 00:27:39.317124 containerd[1877]: time="2025-11-08T00:27:39.316753687Z" level=info msg="RemovePodSandbox for \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" Nov 8 00:27:39.317124 containerd[1877]: time="2025-11-08T00:27:39.316789667Z" level=info msg="Forcibly stopping sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\"" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.390 [WARNING][3887] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-csi--node--driver--sqf2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e6b44f6-8f09-463d-b422-20e45aa79602", ResourceVersion:"1446", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"1201dc07630422aeb9764bdcf6bd3c274c6a245da72767d08a07c39f7cb42349", Pod:"csi-node-driver-sqf2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16b724f94f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.390 [INFO][3887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.390 [INFO][3887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" iface="eth0" netns="" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.390 [INFO][3887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.390 [INFO][3887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.413 [INFO][3896] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.414 [INFO][3896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.414 [INFO][3896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.422 [WARNING][3896] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.422 [INFO][3896] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" HandleID="k8s-pod-network.33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Workload="172.31.23.96-k8s-csi--node--driver--sqf2j-eth0" Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.424 [INFO][3896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:39.427188 containerd[1877]: 2025-11-08 00:27:39.425 [INFO][3887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588" Nov 8 00:27:39.428444 containerd[1877]: time="2025-11-08T00:27:39.427236059Z" level=info msg="TearDown network for sandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" successfully" Nov 8 00:27:39.441431 containerd[1877]: time="2025-11-08T00:27:39.441291635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:27:39.441431 containerd[1877]: time="2025-11-08T00:27:39.441368999Z" level=info msg="RemovePodSandbox \"33675771baacc8370a461466a3878099a9761b1b59a80b190ee2552f61b47588\" returns successfully" Nov 8 00:27:39.442483 containerd[1877]: time="2025-11-08T00:27:39.441904557Z" level=info msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.486 [WARNING][3910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"c913df09-aa74-4701-9dc0-2544dbe42937", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3", Pod:"nginx-deployment-bb8f74bfb-dzhh5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b5bea79ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.486 [INFO][3910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.486 [INFO][3910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" iface="eth0" netns="" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.486 [INFO][3910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.486 [INFO][3910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.510 [INFO][3917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.510 [INFO][3917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.510 [INFO][3917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.521 [WARNING][3917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.521 [INFO][3917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.526 [INFO][3917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:39.528920 containerd[1877]: 2025-11-08 00:27:39.527 [INFO][3910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.529590 containerd[1877]: time="2025-11-08T00:27:39.528966114Z" level=info msg="TearDown network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" successfully" Nov 8 00:27:39.529590 containerd[1877]: time="2025-11-08T00:27:39.528989676Z" level=info msg="StopPodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" returns successfully" Nov 8 00:27:39.529590 containerd[1877]: time="2025-11-08T00:27:39.529370058Z" level=info msg="RemovePodSandbox for \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" Nov 8 00:27:39.529590 containerd[1877]: time="2025-11-08T00:27:39.529392426Z" level=info msg="Forcibly stopping sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\"" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.578 [WARNING][3931] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"c913df09-aa74-4701-9dc0-2544dbe42937", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"188d5f3e7c9eeede0e6ae5417fc4bc29d8a3938b98af7de181660fb18b50e2f3", Pod:"nginx-deployment-bb8f74bfb-dzhh5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b5bea79ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.579 [INFO][3931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.579 [INFO][3931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" iface="eth0" netns="" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.579 [INFO][3931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.579 [INFO][3931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.604 [INFO][3938] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.604 [INFO][3938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.604 [INFO][3938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.613 [WARNING][3938] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.613 [INFO][3938] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" HandleID="k8s-pod-network.437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Workload="172.31.23.96-k8s-nginx--deployment--bb8f74bfb--dzhh5-eth0" Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.617 [INFO][3938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:39.619528 containerd[1877]: 2025-11-08 00:27:39.618 [INFO][3931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c" Nov 8 00:27:39.620291 containerd[1877]: time="2025-11-08T00:27:39.619587090Z" level=info msg="TearDown network for sandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" successfully" Nov 8 00:27:39.624949 containerd[1877]: time="2025-11-08T00:27:39.624893412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:27:39.625128 containerd[1877]: time="2025-11-08T00:27:39.624957264Z" level=info msg="RemovePodSandbox \"437fda703a3cf48147cd2e8e21640c9c9849daab228c79378b2fe47af5b1547c\" returns successfully" Nov 8 00:27:40.249844 kubelet[2339]: E1108 00:27:40.249678 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:41.250932 kubelet[2339]: E1108 00:27:41.250762 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:42.251751 kubelet[2339]: E1108 00:27:42.251691 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:43.252468 kubelet[2339]: E1108 00:27:43.252425 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:43.554117 systemd[1]: Created slice kubepods-besteffort-podcb93c725_b476_4f05_92da_ab8f793a5ce1.slice - libcontainer container kubepods-besteffort-podcb93c725_b476_4f05_92da_ab8f793a5ce1.slice. 
Nov 8 00:27:43.683869 kubelet[2339]: I1108 00:27:43.683817 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac1f7546-0c9e-4590-97d3-516490edaa7a\" (UniqueName: \"kubernetes.io/nfs/cb93c725-b476-4f05-92da-ab8f793a5ce1-pvc-ac1f7546-0c9e-4590-97d3-516490edaa7a\") pod \"test-pod-1\" (UID: \"cb93c725-b476-4f05-92da-ab8f793a5ce1\") " pod="default/test-pod-1" Nov 8 00:27:43.683869 kubelet[2339]: I1108 00:27:43.683858 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrmq\" (UniqueName: \"kubernetes.io/projected/cb93c725-b476-4f05-92da-ab8f793a5ce1-kube-api-access-wlrmq\") pod \"test-pod-1\" (UID: \"cb93c725-b476-4f05-92da-ab8f793a5ce1\") " pod="default/test-pod-1" Nov 8 00:27:43.841750 kernel: FS-Cache: Loaded Nov 8 00:27:43.916962 kernel: RPC: Registered named UNIX socket transport module. Nov 8 00:27:43.917085 kernel: RPC: Registered udp transport module. Nov 8 00:27:43.918114 kernel: RPC: Registered tcp transport module. Nov 8 00:27:43.918198 kernel: RPC: Registered tcp-with-tls transport module. Nov 8 00:27:43.919067 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Nov 8 00:27:44.218126 kernel: NFS: Registering the id_resolver key type Nov 8 00:27:44.218242 kernel: Key type id_resolver registered Nov 8 00:27:44.218266 kernel: Key type id_legacy registered Nov 8 00:27:44.253688 kubelet[2339]: E1108 00:27:44.253613 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:44.255304 nfsidmap[3981]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Nov 8 00:27:44.259840 nfsidmap[3982]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Nov 8 00:27:44.465812 containerd[1877]: time="2025-11-08T00:27:44.465767863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cb93c725-b476-4f05-92da-ab8f793a5ce1,Namespace:default,Attempt:0,}" Nov 8 00:27:44.617962 (udev-worker)[3969]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:27:44.621961 systemd-networkd[1807]: cali5ec59c6bf6e: Link UP Nov 8 00:27:44.623439 systemd-networkd[1807]: cali5ec59c6bf6e: Gained carrier Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.528 [INFO][3987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.96-k8s-test--pod--1-eth0 default cb93c725-b476-4f05-92da-ab8f793a5ce1 1493 0 2025-11-08 00:27:18 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.96 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.528 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.558 [INFO][4000] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" HandleID="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Workload="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.558 [INFO][4000] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" HandleID="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Workload="172.31.23.96-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.96", "pod":"test-pod-1", "timestamp":"2025-11-08 00:27:44.558725503 +0000 UTC"}, Hostname:"172.31.23.96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.558 [INFO][4000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.558 [INFO][4000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.558 [INFO][4000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.96' Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.569 [INFO][4000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.576 [INFO][4000] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.587 [INFO][4000] ipam/ipam.go 511: Trying affinity for 192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.591 [INFO][4000] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.595 [INFO][4000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.595 [INFO][4000] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.597 [INFO][4000] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.603 [INFO][4000] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.612 [INFO][4000] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.87.196/26] block=192.168.87.192/26 handle="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.612 [INFO][4000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.196/26] handle="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" host="172.31.23.96" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.612 [INFO][4000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.612 [INFO][4000] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.87.196/26] IPv6=[] ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" HandleID="k8s-pod-network.ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Workload="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.638678 containerd[1877]: 2025-11-08 00:27:44.613 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cb93c725-b476-4f05-92da-ab8f793a5ce1", ResourceVersion:"1493", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:44.639861 containerd[1877]: 2025-11-08 00:27:44.614 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.196/32] ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.639861 containerd[1877]: 2025-11-08 00:27:44.614 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.639861 containerd[1877]: 2025-11-08 00:27:44.622 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.639861 containerd[1877]: 2025-11-08 00:27:44.623 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.96-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cb93c725-b476-4f05-92da-ab8f793a5ce1", ResourceVersion:"1493", Generation:0, CreationTimestamp:time.Date(2025, 
time.November, 8, 0, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.96", ContainerID:"ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"de:6b:15:2b:c8:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:44.639861 containerd[1877]: 2025-11-08 00:27:44.633 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.96-k8s-test--pod--1-eth0" Nov 8 00:27:44.667682 containerd[1877]: time="2025-11-08T00:27:44.666624858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:44.667682 containerd[1877]: time="2025-11-08T00:27:44.666851519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:44.667682 containerd[1877]: time="2025-11-08T00:27:44.666887289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:44.667682 containerd[1877]: time="2025-11-08T00:27:44.666996898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:44.692028 systemd[1]: Started cri-containerd-ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f.scope - libcontainer container ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f. 
Nov 8 00:27:44.738904 containerd[1877]: time="2025-11-08T00:27:44.738857143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cb93c725-b476-4f05-92da-ab8f793a5ce1,Namespace:default,Attempt:0,} returns sandbox id \"ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f\"" Nov 8 00:27:44.740355 containerd[1877]: time="2025-11-08T00:27:44.740320100Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 8 00:27:45.071361 containerd[1877]: time="2025-11-08T00:27:45.071305819Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:45.073140 containerd[1877]: time="2025-11-08T00:27:45.073079117Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Nov 8 00:27:45.075662 containerd[1877]: time="2025-11-08T00:27:45.075595059Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\", size \"73311824\" in 335.231711ms" Nov 8 00:27:45.075765 containerd[1877]: time="2025-11-08T00:27:45.075647130Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 8 00:27:45.082697 containerd[1877]: time="2025-11-08T00:27:45.082473383Z" level=info msg="CreateContainer within sandbox \"ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Nov 8 00:27:45.155365 containerd[1877]: time="2025-11-08T00:27:45.155298715Z" level=info msg="CreateContainer within sandbox \"ec9f6cc26fb1ca52b15eb693f071a999c9531514514113d7406bd84b4b5c529f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda\"" Nov 8 00:27:45.157345 containerd[1877]: time="2025-11-08T00:27:45.156263759Z" level=info msg="StartContainer for \"5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda\"" Nov 8 00:27:45.198874 systemd[1]: Started cri-containerd-5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda.scope - libcontainer container 5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda. 
Nov 8 00:27:45.247920 containerd[1877]: time="2025-11-08T00:27:45.247870128Z" level=info msg="StartContainer for \"5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda\" returns successfully" Nov 8 00:27:45.253939 kubelet[2339]: E1108 00:27:45.253807 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:45.567756 kubelet[2339]: I1108 00:27:45.567687 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=27.230902084 podStartE2EDuration="27.567669253s" podCreationTimestamp="2025-11-08 00:27:18 +0000 UTC" firstStartedPulling="2025-11-08 00:27:44.739759541 +0000 UTC m=+66.286348811" lastFinishedPulling="2025-11-08 00:27:45.07652671 +0000 UTC m=+66.623115980" observedRunningTime="2025-11-08 00:27:45.567590105 +0000 UTC m=+67.114179398" watchObservedRunningTime="2025-11-08 00:27:45.567669253 +0000 UTC m=+67.114258546" Nov 8 00:27:45.792325 systemd-networkd[1807]: cali5ec59c6bf6e: Gained IPv6LL Nov 8 00:27:45.813085 systemd[1]: run-containerd-runc-k8s.io-5c863dac4ed4518639990132db186079ce6f25a7e2cb61c6c66708385d1a2bda-runc.TwUYtE.mount: Deactivated successfully. Nov 8 00:27:46.254265 kubelet[2339]: E1108 00:27:46.254204 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:46.314337 containerd[1877]: time="2025-11-08T00:27:46.314275391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:46.594914 containerd[1877]: time="2025-11-08T00:27:46.594771868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:46.597101 containerd[1877]: time="2025-11-08T00:27:46.596973784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:46.597101 containerd[1877]: time="2025-11-08T00:27:46.597054721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:46.597422 kubelet[2339]: E1108 00:27:46.597384 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:46.597558 kubelet[2339]: E1108 00:27:46.597430 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:46.597558 kubelet[2339]: E1108 00:27:46.597535 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:46.598512 
containerd[1877]: time="2025-11-08T00:27:46.598484062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:46.993564 containerd[1877]: time="2025-11-08T00:27:46.993512920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:46.995827 containerd[1877]: time="2025-11-08T00:27:46.995775592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:46.995957 containerd[1877]: time="2025-11-08T00:27:46.995878655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:46.996601 kubelet[2339]: E1108 00:27:46.996077 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:46.996601 kubelet[2339]: E1108 00:27:46.996118 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:46.996601 kubelet[2339]: E1108 00:27:46.996368 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:46.996833 kubelet[2339]: E1108 00:27:46.996466 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:27:47.255025 kubelet[2339]: E1108 00:27:47.254890 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:48.255787 kubelet[2339]: E1108 00:27:48.255711 2339 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:48.353543 ntpd[1862]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:27:48.353905 ntpd[1862]: 8 Nov 00:27:48 ntpd[1862]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:27:49.256892 kubelet[2339]: E1108 00:27:49.256825 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:50.257583 kubelet[2339]: E1108 00:27:50.257500 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:51.258250 kubelet[2339]: E1108 00:27:51.258140 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:52.258548 kubelet[2339]: E1108 00:27:52.258480 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:53.259655 kubelet[2339]: E1108 00:27:53.259593 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:54.260644 kubelet[2339]: E1108 00:27:54.260573 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:55.261581 kubelet[2339]: E1108 00:27:55.261527 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:56.261988 kubelet[2339]: E1108 00:27:56.261938 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:57.263100 kubelet[2339]: E1108 00:27:57.262994 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:58.263179 kubelet[2339]: E1108 00:27:58.263137 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:59.179795 kubelet[2339]: E1108 00:27:59.179748 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:59.263607 kubelet[2339]: E1108 00:27:59.263566 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:27:59.315160 kubelet[2339]: E1108 00:27:59.315090 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" 
podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:28:00.264615 kubelet[2339]: E1108 00:28:00.264560 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:01.265611 kubelet[2339]: E1108 00:28:01.265551 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:02.265861 kubelet[2339]: E1108 00:28:02.265797 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:03.266554 kubelet[2339]: E1108 00:28:03.266499 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:04.267179 kubelet[2339]: E1108 00:28:04.267099 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:05.268540 kubelet[2339]: E1108 00:28:05.268462 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:06.269237 kubelet[2339]: E1108 00:28:06.269185 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:07.269853 kubelet[2339]: E1108 00:28:07.269799 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:08.271019 kubelet[2339]: E1108 00:28:08.270972 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:09.271174 kubelet[2339]: E1108 00:28:09.271126 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:10.272052 kubelet[2339]: E1108 00:28:10.271997 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:11.272976 kubelet[2339]: E1108 00:28:11.272896 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:11.460556 kubelet[2339]: E1108 00:28:11.460480 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": context deadline exceeded" Nov 8 00:28:12.273502 kubelet[2339]: E1108 00:28:12.273435 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:12.314527 kubelet[2339]: E1108 00:28:12.314472 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:28:13.274533 kubelet[2339]: E1108 00:28:13.274481 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:14.274997 kubelet[2339]: E1108 00:28:14.274956 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:15.275790 kubelet[2339]: E1108 00:28:15.275722 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:16.276287 kubelet[2339]: E1108 00:28:16.276047 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:17.277220 kubelet[2339]: E1108 00:28:17.277166 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:18.277793 kubelet[2339]: E1108 00:28:18.277747 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:19.179531 kubelet[2339]: E1108 00:28:19.179488 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:19.278657 kubelet[2339]: E1108 00:28:19.278597 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:20.279243 kubelet[2339]: E1108 00:28:20.279186 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:21.280260 kubelet[2339]: E1108 00:28:21.280203 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:21.461116 kubelet[2339]: E1108 00:28:21.461069 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:28:22.280962 kubelet[2339]: E1108 00:28:22.280915 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:23.281737 kubelet[2339]: E1108 00:28:23.281673 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:24.281914 kubelet[2339]: E1108 00:28:24.281855 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:25.282685 kubelet[2339]: E1108 00:28:25.282639 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:25.314462 kubelet[2339]: E1108 00:28:25.314395 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:28:26.283153 kubelet[2339]: E1108 00:28:26.283097 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:27.283679 kubelet[2339]: E1108 00:28:27.283608 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:28.284034 kubelet[2339]: E1108 00:28:28.283992 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:29.284312 kubelet[2339]: E1108 00:28:29.284264 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:30.285544 kubelet[2339]: E1108 00:28:30.285478 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:31.286359 kubelet[2339]: E1108 00:28:31.286307 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:31.461807 kubelet[2339]: E1108 00:28:31.461753 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": context deadline exceeded" Nov 8 00:28:32.246657 kubelet[2339]: E1108 00:28:32.245876 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": unexpected EOF" Nov 8 00:28:32.247102 kubelet[2339]: E1108 00:28:32.247078 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" Nov 8 00:28:32.247370 kubelet[2339]: I1108 00:28:32.247204 2339 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 8 00:28:32.247754 kubelet[2339]: E1108 00:28:32.247723 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="200ms" Nov 8 00:28:32.253695 kubelet[2339]: E1108 00:28:32.242190 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.25.121:6443/api/v1/namespaces/calico-system/events/csi-node-driver-sqf2j.1875e07720276634\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-sqf2j.1875e07720276634 calico-system 1447 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-sqf2j,UID:7e6b44f6-8f09-463d-b422-20e45aa79602,APIVersion:v1,ResourceVersion:959,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.23.96,},FirstTimestamp:2025-11-08 00:27:11 +0000 UTC,LastTimestamp:2025-11-08 00:27:59.314460956 +0000 UTC m=+80.861050229,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.96,}" Nov 8 00:28:32.287056 kubelet[2339]: E1108 00:28:32.286998 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:32.449443 kubelet[2339]: E1108 00:28:32.449384 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="400ms" Nov 8 00:28:32.851429 kubelet[2339]: E1108 00:28:32.851380 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="800ms" Nov 8 00:28:33.244331 kubelet[2339]: E1108 00:28:33.244255 2339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://172.31.25.121:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-sqf2j\": dial tcp 172.31.25.121:6443: connect: connection refused - error from a previous attempt: unexpected EOF" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:28:33.253213 kubelet[2339]: E1108 00:28:33.252992 2339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://172.31.25.121:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-sqf2j\": dial tcp 172.31.25.121:6443: connect: connection refused" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:28:33.253682 kubelet[2339]: E1108 00:28:33.253529 2339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://172.31.25.121:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-sqf2j\": dial tcp 172.31.25.121:6443: connect: connection refused" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" pod="calico-system/csi-node-driver-sqf2j" Nov 8 00:28:33.287809 kubelet[2339]: E1108 00:28:33.287758 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:34.288734 kubelet[2339]: E1108 00:28:34.288673 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:35.289781 kubelet[2339]: E1108 00:28:35.289663 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:36.290055 kubelet[2339]: E1108 00:28:36.289997 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:37.291238 kubelet[2339]: E1108 00:28:37.291181 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:37.314904 containerd[1877]: 
time="2025-11-08T00:28:37.314696241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:28:37.580330 containerd[1877]: time="2025-11-08T00:28:37.580028249Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:37.582255 containerd[1877]: time="2025-11-08T00:28:37.582177127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:28:37.582498 containerd[1877]: time="2025-11-08T00:28:37.582226746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:28:37.582586 kubelet[2339]: E1108 00:28:37.582496 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:37.582586 kubelet[2339]: E1108 00:28:37.582545 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:37.582785 kubelet[2339]: E1108 00:28:37.582656 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:37.583753 containerd[1877]: time="2025-11-08T00:28:37.583722356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:28:37.871083 containerd[1877]: time="2025-11-08T00:28:37.870911932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:37.872978 containerd[1877]: time="2025-11-08T00:28:37.872915267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:28:37.873110 containerd[1877]: time="2025-11-08T00:28:37.872995126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:28:37.873205 kubelet[2339]: E1108 00:28:37.873164 2339 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:37.873262 kubelet[2339]: E1108 
00:28:37.873211 2339 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:37.873295 kubelet[2339]: E1108 00:28:37.873281 2339 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sqf2j_calico-system(7e6b44f6-8f09-463d-b422-20e45aa79602): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:37.873358 kubelet[2339]: E1108 00:28:37.873322 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:28:38.291579 kubelet[2339]: E1108 00:28:38.291515 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:39.179448 kubelet[2339]: E1108 00:28:39.179390 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:39.292486 kubelet[2339]: E1108 00:28:39.292431 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:40.293386 kubelet[2339]: E1108 00:28:40.293298 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:40.641812 systemd[1]: run-containerd-runc-k8s.io-9341796569e860aa5e89feb6145ebd892e5eab21d6735745d67a4735d548425c-runc.lxEjlf.mount: Deactivated successfully. 
Nov 8 00:28:41.293975 kubelet[2339]: E1108 00:28:41.293919 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:42.294108 kubelet[2339]: E1108 00:28:42.294045 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:43.295194 kubelet[2339]: E1108 00:28:43.295140 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:43.653370 kubelet[2339]: E1108 00:28:43.653215 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Nov 8 00:28:44.296218 kubelet[2339]: E1108 00:28:44.295991 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:45.296871 kubelet[2339]: E1108 00:28:45.296798 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:46.297447 kubelet[2339]: E1108 00:28:46.297376 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:47.298029 kubelet[2339]: E1108 00:28:47.297974 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:48.298755 kubelet[2339]: E1108 00:28:48.298717 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:49.298915 kubelet[2339]: E1108 00:28:49.298827 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:50.299026 kubelet[2339]: E1108 00:28:50.298967 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:51.299721 kubelet[2339]: E1108 00:28:51.299661 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:52.300766 kubelet[2339]: E1108 00:28:52.300717 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:52.314485 kubelet[2339]: E1108 00:28:52.314426 2339 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-sqf2j" podUID="7e6b44f6-8f09-463d-b422-20e45aa79602" Nov 8 00:28:53.301689 kubelet[2339]: E1108 00:28:53.301607 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:54.302394 kubelet[2339]: E1108 00:28:54.302338 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:28:55.254902 kubelet[2339]: E1108 00:28:55.254375 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.96?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 8 00:28:55.302826 kubelet[2339]: E1108 00:28:55.302767 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"