Nov 8 00:35:06.944761 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:35:06.944837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:35:06.944863 kernel: BIOS-provided physical RAM map:
Nov 8 00:35:06.944874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:35:06.944885 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 8 00:35:06.944896 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Nov 8 00:35:06.944910 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Nov 8 00:35:06.944923 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 8 00:35:06.944935 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 8 00:35:06.944948 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 8 00:35:06.944958 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 8 00:35:06.944970 kernel: NX (Execute Disable) protection: active
Nov 8 00:35:06.944981 kernel: APIC: Static calls initialized
Nov 8 00:35:06.944994 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:35:06.945010 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 8 00:35:06.945025 kernel: SMBIOS 2.7 present.
Nov 8 00:35:06.945037 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 8 00:35:06.945049 kernel: Hypervisor detected: KVM
Nov 8 00:35:06.945063 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:35:06.945075 kernel: kvm-clock: using sched offset of 4181980377 cycles
Nov 8 00:35:06.945086 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:35:06.945108 kernel: tsc: Detected 2499.998 MHz processor
Nov 8 00:35:06.945122 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:35:06.945133 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:35:06.945145 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 8 00:35:06.945163 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:35:06.945176 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:35:06.945191 kernel: Using GB pages for direct mapping
Nov 8 00:35:06.945202 kernel: Secure boot disabled
Nov 8 00:35:06.945213 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:35:06.945225 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 8 00:35:06.945238 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 8 00:35:06.945250 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 8 00:35:06.945264 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 8 00:35:06.945282 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 8 00:35:06.945294 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 8 00:35:06.945306 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 8 00:35:06.945320 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 8 00:35:06.945334 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 8 00:35:06.945349 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 8 00:35:06.945371 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:35:06.945390 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:35:06.945405 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 8 00:35:06.945420 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 8 00:35:06.945436 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 8 00:35:06.945451 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 8 00:35:06.945466 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 8 00:35:06.945482 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 8 00:35:06.945500 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 8 00:35:06.945515 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 8 00:35:06.945530 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 8 00:35:06.945546 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 8 00:35:06.945561 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 8 00:35:06.945576 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 8 00:35:06.945592 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:35:06.945607 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:35:06.945622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 8 00:35:06.945641 kernel: NUMA: Initialized distance table, cnt=1
Nov 8 00:35:06.945656 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Nov 8 00:35:06.945672 kernel: Zone ranges:
Nov 8 00:35:06.945697 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:35:06.945709 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 8 00:35:06.945722 kernel: Normal empty
Nov 8 00:35:06.945734 kernel: Movable zone start for each node
Nov 8 00:35:06.945747 kernel: Early memory node ranges
Nov 8 00:35:06.945760 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:35:06.945776 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 8 00:35:06.945796 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 8 00:35:06.945811 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 8 00:35:06.945827 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:35:06.945844 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:35:06.945859 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:35:06.945873 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 8 00:35:06.945887 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 8 00:35:06.945902 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:35:06.945917 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 8 00:35:06.945936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:35:06.945949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:35:06.945963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:35:06.945976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:35:06.945990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:35:06.946003 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:35:06.946017 kernel: TSC deadline timer available
Nov 8 00:35:06.946030 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:35:06.946045 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:35:06.946064 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 8 00:35:06.946079 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:35:06.946095 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:35:06.946110 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:35:06.946124 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:35:06.946138 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:35:06.946154 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:35:06.946170 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:35:06.946186 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:35:06.946208 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:35:06.946225 kernel: random: crng init done
Nov 8 00:35:06.946240 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:35:06.946254 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:35:06.946268 kernel: Fallback order for Node 0: 0
Nov 8 00:35:06.946281 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Nov 8 00:35:06.946294 kernel: Policy zone: DMA32
Nov 8 00:35:06.946308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:35:06.946325 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 162940K reserved, 0K cma-reserved)
Nov 8 00:35:06.946339 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:35:06.946353 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:35:06.946366 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:35:06.946380 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:35:06.946393 kernel: Dynamic Preempt: voluntary
Nov 8 00:35:06.946407 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:35:06.946421 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:35:06.946435 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:35:06.946451 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:35:06.946465 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:35:06.946478 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:35:06.946492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:35:06.946506 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:35:06.946520 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:35:06.946534 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:35:06.946562 kernel: Console: colour dummy device 80x25
Nov 8 00:35:06.946577 kernel: printk: console [tty0] enabled
Nov 8 00:35:06.946592 kernel: printk: console [ttyS0] enabled
Nov 8 00:35:06.946606 kernel: ACPI: Core revision 20230628
Nov 8 00:35:06.946621 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 8 00:35:06.946639 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:35:06.946653 kernel: x2apic enabled
Nov 8 00:35:06.946668 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:35:06.948359 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:35:06.948383 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 8 00:35:06.948406 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:35:06.948422 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:35:06.948437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:35:06.948452 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:35:06.948467 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:35:06.948483 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:35:06.948499 kernel: RETBleed: Vulnerable
Nov 8 00:35:06.948512 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:35:06.948526 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:35:06.948541 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:35:06.948571 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 8 00:35:06.948585 kernel: active return thunk: its_return_thunk
Nov 8 00:35:06.948600 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:35:06.948615 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:35:06.948630 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:35:06.948645 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:35:06.948660 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 00:35:06.948675 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 00:35:06.948714 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:35:06.948737 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:35:06.948751 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:35:06.948769 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:35:06.948785 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:35:06.948800 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 00:35:06.948815 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 00:35:06.948829 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 8 00:35:06.948857 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 8 00:35:06.948872 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 8 00:35:06.948885 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 8 00:35:06.948897 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 8 00:35:06.948912 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:35:06.948927 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:35:06.948947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:35:06.948963 kernel: landlock: Up and running.
Nov 8 00:35:06.948977 kernel: SELinux: Initializing.
Nov 8 00:35:06.948991 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:35:06.949005 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:35:06.949019 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:35:06.949035 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:06.949052 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:06.949068 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:06.949085 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:35:06.949106 kernel: signal: max sigframe size: 3632
Nov 8 00:35:06.949122 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:35:06.949140 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:35:06.949157 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:35:06.949173 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:35:06.949189 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:35:06.949206 kernel: .... node #0, CPUs: #1
Nov 8 00:35:06.949223 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 8 00:35:06.949239 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:35:06.949258 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:35:06.949274 kernel: smpboot: Max logical packages: 1
Nov 8 00:35:06.949291 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 8 00:35:06.949307 kernel: devtmpfs: initialized
Nov 8 00:35:06.949323 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:35:06.949338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 8 00:35:06.949352 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:35:06.949369 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:35:06.949384 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:35:06.949403 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:35:06.949419 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:35:06.949434 kernel: audit: type=2000 audit(1762562106.107:1): state=initialized audit_enabled=0 res=1
Nov 8 00:35:06.949450 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:35:06.949466 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:35:06.949481 kernel: cpuidle: using governor menu
Nov 8 00:35:06.949497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:35:06.949513 kernel: dca service started, version 1.12.1
Nov 8 00:35:06.949528 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:35:06.949547 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:35:06.949562 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:35:06.949578 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:35:06.949594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:35:06.949609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:35:06.949625 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:35:06.949640 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:35:06.949656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:35:06.949672 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 8 00:35:06.949711 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:35:06.949727 kernel: ACPI: Interpreter enabled
Nov 8 00:35:06.949743 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:35:06.949758 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:35:06.949774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:35:06.949790 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:35:06.949806 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 8 00:35:06.949821 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:35:06.950052 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:35:06.950820 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:35:06.950992 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:35:06.951015 kernel: acpiphp: Slot [3] registered
Nov 8 00:35:06.951033 kernel: acpiphp: Slot [4] registered
Nov 8 00:35:06.951049 kernel: acpiphp: Slot [5] registered
Nov 8 00:35:06.951066 kernel: acpiphp: Slot [6] registered
Nov 8 00:35:06.951082 kernel: acpiphp: Slot [7] registered
Nov 8 00:35:06.951104 kernel: acpiphp: Slot [8] registered
Nov 8 00:35:06.951121 kernel: acpiphp: Slot [9] registered
Nov 8 00:35:06.951136 kernel: acpiphp: Slot [10] registered
Nov 8 00:35:06.951153 kernel: acpiphp: Slot [11] registered
Nov 8 00:35:06.951169 kernel: acpiphp: Slot [12] registered
Nov 8 00:35:06.951186 kernel: acpiphp: Slot [13] registered
Nov 8 00:35:06.951202 kernel: acpiphp: Slot [14] registered
Nov 8 00:35:06.951219 kernel: acpiphp: Slot [15] registered
Nov 8 00:35:06.951235 kernel: acpiphp: Slot [16] registered
Nov 8 00:35:06.951252 kernel: acpiphp: Slot [17] registered
Nov 8 00:35:06.951271 kernel: acpiphp: Slot [18] registered
Nov 8 00:35:06.951288 kernel: acpiphp: Slot [19] registered
Nov 8 00:35:06.951304 kernel: acpiphp: Slot [20] registered
Nov 8 00:35:06.951320 kernel: acpiphp: Slot [21] registered
Nov 8 00:35:06.951337 kernel: acpiphp: Slot [22] registered
Nov 8 00:35:06.951354 kernel: acpiphp: Slot [23] registered
Nov 8 00:35:06.951370 kernel: acpiphp: Slot [24] registered
Nov 8 00:35:06.951386 kernel: acpiphp: Slot [25] registered
Nov 8 00:35:06.951402 kernel: acpiphp: Slot [26] registered
Nov 8 00:35:06.951422 kernel: acpiphp: Slot [27] registered
Nov 8 00:35:06.951438 kernel: acpiphp: Slot [28] registered
Nov 8 00:35:06.951454 kernel: acpiphp: Slot [29] registered
Nov 8 00:35:06.951470 kernel: acpiphp: Slot [30] registered
Nov 8 00:35:06.951486 kernel: acpiphp: Slot [31] registered
Nov 8 00:35:06.951503 kernel: PCI host bridge to bus 0000:00
Nov 8 00:35:06.951646 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:35:06.951802 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:35:06.951947 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:35:06.952084 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 8 00:35:06.952220 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:35:06.952353 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:35:06.952528 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:35:06.955808 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 8 00:35:06.956022 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 8 00:35:06.956183 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 8 00:35:06.956325 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 8 00:35:06.956467 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 8 00:35:06.956624 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 8 00:35:06.956800 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 8 00:35:06.956942 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 8 00:35:06.957081 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 8 00:35:06.957236 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 8 00:35:06.957373 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Nov 8 00:35:06.957506 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:35:06.957638 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Nov 8 00:35:06.959828 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:35:06.959997 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 8 00:35:06.960149 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Nov 8 00:35:06.960300 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 8 00:35:06.960443 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Nov 8 00:35:06.960465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:35:06.960482 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:35:06.960499 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:35:06.960515 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:35:06.960532 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:35:06.960562 kernel: iommu: Default domain type: Translated
Nov 8 00:35:06.960578 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:35:06.960595 kernel: efivars: Registered efivars operations
Nov 8 00:35:06.960612 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:35:06.960629 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:35:06.960646 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 8 00:35:06.960662 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 8 00:35:06.960822 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 8 00:35:06.960963 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 8 00:35:06.961112 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:35:06.961133 kernel: vgaarb: loaded
Nov 8 00:35:06.961150 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 8 00:35:06.961167 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 8 00:35:06.961183 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:35:06.961200 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:35:06.961217 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:35:06.961233 kernel: pnp: PnP ACPI init
Nov 8 00:35:06.961249 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:35:06.961270 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:35:06.961286 kernel: NET: Registered PF_INET protocol family
Nov 8 00:35:06.961303 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:35:06.961320 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:35:06.961336 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:35:06.961352 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:35:06.961369 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:35:06.961386 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:35:06.961406 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:35:06.961423 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:35:06.961439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:35:06.961456 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:35:06.961588 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:35:06.963404 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:35:06.963556 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:35:06.963674 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 8 00:35:06.963832 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:35:06.963989 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:35:06.964012 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:35:06.964030 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:35:06.964049 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:35:06.964065 kernel: clocksource: Switched to clocksource tsc
Nov 8 00:35:06.964082 kernel: Initialise system trusted keyrings
Nov 8 00:35:06.964099 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:35:06.964116 kernel: Key type asymmetric registered
Nov 8 00:35:06.964136 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:35:06.964152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:35:06.964168 kernel: io scheduler mq-deadline registered
Nov 8 00:35:06.964185 kernel: io scheduler kyber registered
Nov 8 00:35:06.964202 kernel: io scheduler bfq registered
Nov 8 00:35:06.964218 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:35:06.964235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:35:06.964251 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:35:06.964267 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:35:06.964286 kernel: i8042: Warning: Keylock active
Nov 8 00:35:06.964302 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:35:06.964319 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:35:06.964471 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 8 00:35:06.964619 kernel: rtc_cmos 00:00: registered as rtc0
Nov 8 00:35:06.964793 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:35:06 UTC (1762562106)
Nov 8 00:35:06.964923 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 8 00:35:06.964944 kernel: intel_pstate: CPU model not supported
Nov 8 00:35:06.964966 kernel: efifb: probing for efifb
Nov 8 00:35:06.964983 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Nov 8 00:35:06.965000 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 8 00:35:06.965017 kernel: efifb: scrolling: redraw
Nov 8 00:35:06.965034 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:35:06.965051 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:35:06.965065 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:35:06.965081 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:35:06.965098 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:35:06.965118 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:35:06.965134 kernel: Segment Routing with IPv6
Nov 8 00:35:06.965151 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:35:06.965168 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:35:06.965185 kernel: Key type dns_resolver registered
Nov 8 00:35:06.965200 kernel: IPI shorthand broadcast: enabled
Nov 8 00:35:06.965244 kernel: sched_clock: Marking stable (508004527, 131231230)->(709212886, -69977129)
Nov 8 00:35:06.965265 kernel: registered taskstats version 1
Nov 8 00:35:06.965283 kernel: Loading compiled-in X.509 certificates
Nov 8 00:35:06.965304 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:35:06.965321 kernel: Key type .fscrypt registered
Nov 8 00:35:06.965338 kernel: Key type fscrypt-provisioning registered
Nov 8 00:35:06.965355 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:35:06.965372 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:35:06.965390 kernel: ima: No architecture policies found
Nov 8 00:35:06.965407 kernel: clk: Disabling unused clocks
Nov 8 00:35:06.965424 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:35:06.965442 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:35:06.965463 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:35:06.965481 kernel: Run /init as init process
Nov 8 00:35:06.965498 kernel: with arguments:
Nov 8 00:35:06.965515 kernel: /init
Nov 8 00:35:06.965532 kernel: with environment:
Nov 8 00:35:06.965549 kernel: HOME=/
Nov 8 00:35:06.965566 kernel: TERM=linux
Nov 8 00:35:06.965587 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:35:06.965611 systemd[1]: Detected virtualization amazon.
Nov 8 00:35:06.965628 systemd[1]: Detected architecture x86-64.
Nov 8 00:35:06.965646 systemd[1]: Running in initrd.
Nov 8 00:35:06.965663 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:35:06.967630 systemd[1]: Hostname set to .
Nov 8 00:35:06.967657 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:35:06.967675 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:35:06.967708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:35:06.967733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:35:06.967750 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:35:06.967767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:35:06.967786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:35:06.967808 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:35:06.967832 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:35:06.967851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:35:06.967871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:35:06.967889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:35:06.967907 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:35:06.967925 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:35:06.967943 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:35:06.967964 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:35:06.967983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:35:06.968001 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:35:06.968020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:35:06.968038 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:35:06.968057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:35:06.968075 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:35:06.968094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:35:06.968112 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:35:06.968134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:35:06.968153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:35:06.968173 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:35:06.968191 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:35:06.968210 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:35:06.968229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:35:06.968248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:06.968266 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:35:06.968285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:35:06.968353 systemd-journald[178]: Collecting audit messages is disabled. Nov 8 00:35:06.968395 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:35:06.968419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:35:06.968439 systemd-journald[178]: Journal started Nov 8 00:35:06.968476 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2e408a496f073c0b6f37519f312105) is 4.7M, max 38.2M, 33.4M free. 
Nov 8 00:35:06.945146 systemd-modules-load[179]: Inserted module 'overlay' Nov 8 00:35:06.979716 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:35:06.982070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:06.998738 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:35:07.000039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:07.005807 kernel: Bridge firewalling registered Nov 8 00:35:07.005848 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:35:07.000854 systemd-modules-load[179]: Inserted module 'br_netfilter' Nov 8 00:35:07.006267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:35:07.009128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:35:07.011793 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:35:07.022923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:35:07.027921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:35:07.033937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:35:07.037555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:35:07.045918 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:35:07.059800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:35:07.061878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:35:07.070528 dracut-cmdline[207]: dracut-dracut-053 Nov 8 00:35:07.073983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:35:07.077552 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:35:07.126607 systemd-resolved[217]: Positive Trust Anchors: Nov 8 00:35:07.127713 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:35:07.129142 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:35:07.137444 systemd-resolved[217]: Defaulting to hostname 'linux'. Nov 8 00:35:07.139893 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:35:07.140740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:35:07.168719 kernel: SCSI subsystem initialized Nov 8 00:35:07.178708 kernel: Loading iSCSI transport class v2.0-870. 
Nov 8 00:35:07.190717 kernel: iscsi: registered transport (tcp) Nov 8 00:35:07.215108 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:35:07.215198 kernel: QLogic iSCSI HBA Driver Nov 8 00:35:07.257781 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:35:07.263926 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:35:07.291410 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:35:07.291488 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:35:07.291512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:35:07.335711 kernel: raid6: avx512x4 gen() 17966 MB/s Nov 8 00:35:07.353718 kernel: raid6: avx512x2 gen() 18054 MB/s Nov 8 00:35:07.371713 kernel: raid6: avx512x1 gen() 17831 MB/s Nov 8 00:35:07.389716 kernel: raid6: avx2x4 gen() 17975 MB/s Nov 8 00:35:07.407713 kernel: raid6: avx2x2 gen() 17603 MB/s Nov 8 00:35:07.426007 kernel: raid6: avx2x1 gen() 13666 MB/s Nov 8 00:35:07.426088 kernel: raid6: using algorithm avx512x2 gen() 18054 MB/s Nov 8 00:35:07.444939 kernel: raid6: .... xor() 24008 MB/s, rmw enabled Nov 8 00:35:07.445018 kernel: raid6: using avx512x2 recovery algorithm Nov 8 00:35:07.467725 kernel: xor: automatically using best checksumming function avx Nov 8 00:35:07.636710 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:35:07.648423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:35:07.654880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:35:07.670639 systemd-udevd[397]: Using default interface naming scheme 'v255'. Nov 8 00:35:07.675807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:35:07.682878 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 8 00:35:07.705882 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Nov 8 00:35:07.738215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:35:07.746009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:35:07.819580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:35:07.827873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:35:07.853160 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:35:07.858904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:35:07.861089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:35:07.861618 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:35:07.869955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:35:07.908662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:35:07.924643 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 8 00:35:07.924945 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 8 00:35:07.927092 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:35:07.933971 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 8 00:35:07.954706 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:48:4f:94:e6:fb Nov 8 00:35:07.956671 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:35:07.967673 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:35:07.967770 kernel: AES CTR mode by8 optimization enabled Nov 8 00:35:07.973158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:35:07.973322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:35:07.974197 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:07.976751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:35:07.976960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:07.978550 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:07.990954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:07.994104 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 8 00:35:07.996702 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 8 00:35:08.009219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:35:08.011200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:08.017700 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 8 00:35:08.023388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:08.029174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:35:08.029210 kernel: GPT:9289727 != 33554431 Nov 8 00:35:08.029231 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:35:08.029260 kernel: GPT:9289727 != 33554431 Nov 8 00:35:08.029280 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:35:08.029299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:35:08.048604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:08.055930 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:08.082626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:35:08.102737 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (445) Nov 8 00:35:08.126944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:35:08.158739 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (458) Nov 8 00:35:08.212100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 8 00:35:08.223148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 8 00:35:08.223867 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 8 00:35:08.231741 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 8 00:35:08.237901 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:35:08.245514 disk-uuid[631]: Primary Header is updated. Nov 8 00:35:08.245514 disk-uuid[631]: Secondary Entries is updated. Nov 8 00:35:08.245514 disk-uuid[631]: Secondary Header is updated. Nov 8 00:35:08.250743 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:35:08.256716 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:35:08.267758 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:35:09.265771 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:35:09.266858 disk-uuid[632]: The operation has completed successfully. Nov 8 00:35:09.419857 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:35:09.420004 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:35:09.437920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 8 00:35:09.442830 sh[975]: Success Nov 8 00:35:09.458719 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:35:09.552897 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:35:09.562845 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:35:09.566027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:35:09.595797 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:35:09.595872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:09.597824 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:35:09.600648 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:35:09.600714 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:35:09.664721 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:35:09.667490 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:35:09.668516 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:35:09.674897 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:35:09.677920 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 8 00:35:09.699134 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:09.699196 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:09.701386 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:35:09.707709 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:35:09.725102 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:09.724638 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:35:09.734468 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:35:09.741030 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:35:09.791166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:35:09.815007 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:35:09.837141 systemd-networkd[1167]: lo: Link UP Nov 8 00:35:09.837153 systemd-networkd[1167]: lo: Gained carrier Nov 8 00:35:09.838907 systemd-networkd[1167]: Enumeration completed Nov 8 00:35:09.839037 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:35:09.839546 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:35:09.839551 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:35:09.841482 systemd[1]: Reached target network.target - Network. Nov 8 00:35:09.842967 systemd-networkd[1167]: eth0: Link UP Nov 8 00:35:09.842973 systemd-networkd[1167]: eth0: Gained carrier Nov 8 00:35:09.842988 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:35:09.855807 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.30.13/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:35:09.958757 ignition[1096]: Ignition 2.19.0 Nov 8 00:35:09.958771 ignition[1096]: Stage: fetch-offline Nov 8 00:35:09.960656 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:35:09.959001 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:09.959011 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:09.959320 ignition[1096]: Ignition finished successfully Nov 8 00:35:09.969160 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:35:09.985197 ignition[1177]: Ignition 2.19.0 Nov 8 00:35:09.985208 ignition[1177]: Stage: fetch Nov 8 00:35:09.985549 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:09.985558 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:09.985641 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.000715 ignition[1177]: PUT result: OK Nov 8 00:35:10.002722 ignition[1177]: parsed url from cmdline: "" Nov 8 00:35:10.002734 ignition[1177]: no config URL provided Nov 8 00:35:10.002744 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:35:10.002759 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:35:10.002785 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.003515 ignition[1177]: PUT result: OK Nov 8 00:35:10.003570 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 8 00:35:10.004662 ignition[1177]: GET result: OK Nov 8 00:35:10.004732 ignition[1177]: parsing config with SHA512: 52296593289e5e91e5aa845f60bee62c6edb0595b5d3b5d81e8ec84b4f6cc552f22ed6ac48aaa899736e96266ba79c7de92d32f87023862e95c3f7329ff8caa8 Nov 8 00:35:10.008800 unknown[1177]: fetched base config from "system"
Nov 8 00:35:10.009157 ignition[1177]: fetch: fetch complete Nov 8 00:35:10.008813 unknown[1177]: fetched base config from "system" Nov 8 00:35:10.009162 ignition[1177]: fetch: fetch passed Nov 8 00:35:10.008822 unknown[1177]: fetched user config from "aws" Nov 8 00:35:10.009211 ignition[1177]: Ignition finished successfully Nov 8 00:35:10.013348 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:35:10.021146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:35:10.038045 ignition[1184]: Ignition 2.19.0 Nov 8 00:35:10.038060 ignition[1184]: Stage: kargs Nov 8 00:35:10.038546 ignition[1184]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:10.038561 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:10.038727 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.040414 ignition[1184]: PUT result: OK Nov 8 00:35:10.043115 ignition[1184]: kargs: kargs passed Nov 8 00:35:10.043191 ignition[1184]: Ignition finished successfully Nov 8 00:35:10.045155 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:35:10.048934 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:35:10.065669 ignition[1190]: Ignition 2.19.0 Nov 8 00:35:10.065705 ignition[1190]: Stage: disks Nov 8 00:35:10.066178 ignition[1190]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:10.066193 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:10.066315 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.067379 ignition[1190]: PUT result: OK Nov 8 00:35:10.070041 ignition[1190]: disks: disks passed Nov 8 00:35:10.070120 ignition[1190]: Ignition finished successfully Nov 8 00:35:10.071843 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:35:10.072467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:35:10.072927 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:35:10.073476 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:35:10.074058 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:35:10.074600 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:35:10.079904 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:35:10.105748 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:35:10.108887 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:35:10.115822 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:35:10.223754 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:35:10.224382 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:35:10.225860 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:35:10.233853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:35:10.237848 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:35:10.239880 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:35:10.239955 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:35:10.239992 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:35:10.250951 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:35:10.259115 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 8 00:35:10.263829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217) Nov 8 00:35:10.263874 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:10.263894 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:10.263913 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:35:10.275730 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:35:10.277654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:35:10.422833 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:35:10.429458 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:35:10.435031 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:35:10.440927 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:35:10.598434 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:35:10.601903 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:35:10.605879 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:35:10.615932 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:35:10.618013 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:10.657743 ignition[1331]: INFO : Ignition 2.19.0 Nov 8 00:35:10.657743 ignition[1331]: INFO : Stage: mount Nov 8 00:35:10.660059 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:10.660059 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:10.660059 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.658894 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 8 00:35:10.662632 ignition[1331]: INFO : PUT result: OK Nov 8 00:35:10.663941 ignition[1331]: INFO : mount: mount passed Nov 8 00:35:10.664317 ignition[1331]: INFO : Ignition finished successfully Nov 8 00:35:10.666129 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:35:10.669880 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:35:10.698997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:35:10.716925 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343) Nov 8 00:35:10.721128 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:10.721233 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:10.721269 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:35:10.727720 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:35:10.730226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:35:10.752019 ignition[1359]: INFO : Ignition 2.19.0 Nov 8 00:35:10.752019 ignition[1359]: INFO : Stage: files Nov 8 00:35:10.753655 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:35:10.753655 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:35:10.753655 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:35:10.754971 ignition[1359]: INFO : PUT result: OK Nov 8 00:35:10.756751 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:35:10.758429 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:35:10.758429 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:35:10.777585 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:35:10.778384 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:35:10.778384 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:35:10.778114 unknown[1359]: wrote ssh authorized keys file for user: core Nov 8 00:35:10.793881 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:35:10.794737 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:35:11.210897 systemd-networkd[1167]: eth0: Gained IPv6LL Nov 8 00:35:11.249925 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Nov 8 00:35:11.711456 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:35:11.711456 ignition[1359]: INFO : files: op(8): [started] processing unit "containerd.service" Nov 8 00:35:11.714053 ignition[1359]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:35:11.715287 ignition[1359]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:35:11.715287 ignition[1359]: INFO : files: op(8): [finished] processing unit "containerd.service"
Nov 8 00:35:11.715287 ignition[1359]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:35:11.715287 ignition[1359]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:35:11.715287 ignition[1359]: INFO : files: files passed Nov 8 00:35:11.715287 ignition[1359]: INFO : Ignition finished successfully Nov 8 00:35:11.717484 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:35:11.722966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:35:11.727891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:35:11.732386 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:35:11.732518 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:35:11.750963 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:35:11.750963 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:35:11.755421 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:35:11.756676 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:35:11.758403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:35:11.763976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:35:11.801304 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:35:11.801421 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:35:11.802919 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:35:11.803655 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:35:11.804509 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:35:11.811975 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:35:11.825435 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:35:11.830967 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:35:11.843637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:35:11.844388 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:35:11.845569 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:35:11.846466 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:35:11.846699 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:35:11.847910 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:35:11.849023 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:35:11.849889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:35:11.850722 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:35:11.851510 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:35:11.852365 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:35:11.853347 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:35:11.854208 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:35:11.855452 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:35:11.856243 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:35:11.857126 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 8 00:35:11.857308 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:35:11.858453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:35:11.859263 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:35:11.859963 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:35:11.860827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:35:11.861367 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:35:11.861541 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:35:11.863034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:35:11.863221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:35:11.863935 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:35:11.864087 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:35:11.871974 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:35:11.873890 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:35:11.874784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:35:11.880150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:35:11.881267 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:35:11.882089 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:35:11.884203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:35:11.884376 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:35:11.891933 ignition[1412]: INFO : Ignition 2.19.0
Nov 8 00:35:11.891933 ignition[1412]: INFO : Stage: umount
Nov 8 00:35:11.896652 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:35:11.896652 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:35:11.896652 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:35:11.896652 ignition[1412]: INFO : PUT result: OK
Nov 8 00:35:11.899458 ignition[1412]: INFO : umount: umount passed
Nov 8 00:35:11.899458 ignition[1412]: INFO : Ignition finished successfully
Nov 8 00:35:11.905038 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:35:11.905189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:35:11.906166 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:35:11.906304 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:35:11.908500 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:35:11.908650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:35:11.911671 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:35:11.911782 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:35:11.912309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:35:11.912379 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:35:11.914841 systemd[1]: Stopped target network.target - Network.
Nov 8 00:35:11.915484 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:35:11.915615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:35:11.917022 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:35:11.917512 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:35:11.920765 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:35:11.921284 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:35:11.921757 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:35:11.922245 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:35:11.922304 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:35:11.924820 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:35:11.924882 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:35:11.925770 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:35:11.925847 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:35:11.926336 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:35:11.926394 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:35:11.927076 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:35:11.927638 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:35:11.930241 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:35:11.934554 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:35:11.934740 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Nov 8 00:35:11.935785 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:35:11.938078 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:35:11.938288 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:35:11.940474 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:35:11.940644 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:35:11.945814 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:35:11.946415 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:35:11.946497 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:35:11.950490 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:35:11.950566 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:35:11.951423 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:35:11.951488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:35:11.953471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:35:11.953540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:35:11.954407 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:35:11.966863 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:35:11.967068 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:35:11.969989 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:35:11.970131 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:35:11.972025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:35:11.972100 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:35:11.973171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:35:11.973224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:35:11.973951 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:35:11.974015 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:35:11.975083 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:35:11.975142 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:35:11.976235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:35:11.976293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:35:11.987910 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:35:11.989110 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:35:11.989176 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:35:11.989638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:35:11.989700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:11.995356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:35:11.995499 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:35:12.053211 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:35:12.053324 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:35:12.054738 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:35:12.055143 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:35:12.055225 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:35:12.064932 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:35:12.074097 systemd[1]: Switching root.
Nov 8 00:35:12.104377 systemd-journald[178]: Journal stopped
Nov 8 00:35:13.506946 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:35:13.507038 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:35:13.507059 kernel: SELinux: policy capability open_perms=1
Nov 8 00:35:13.507076 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:35:13.507092 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:35:13.507118 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:35:13.507140 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:35:13.507163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:35:13.507190 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:35:13.507212 kernel: audit: type=1403 audit(1762562112.477:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:35:13.507241 systemd[1]: Successfully loaded SELinux policy in 40.520ms.
Nov 8 00:35:13.507273 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.472ms.
Nov 8 00:35:13.507298 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:35:13.507322 systemd[1]: Detected virtualization amazon.
Nov 8 00:35:13.507349 systemd[1]: Detected architecture x86-64.
Nov 8 00:35:13.507371 systemd[1]: Detected first boot.
Nov 8 00:35:13.507396 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:35:13.507418 zram_generator::config[1475]: No configuration found.
Nov 8 00:35:13.507441 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:35:13.507461 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:35:13.507483 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 8 00:35:13.507505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:35:13.507530 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:35:13.507552 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:35:13.507572 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:35:13.507592 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:35:13.507614 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:35:13.507637 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:35:13.507659 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:35:13.507696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:35:13.507718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:35:13.507746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:35:13.507776 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:35:13.507798 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:35:13.507819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:35:13.507839 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:35:13.507859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:35:13.507880 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:35:13.507900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:35:13.507919 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:35:13.507942 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:35:13.507962 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:35:13.507981 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:35:13.508002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:35:13.508022 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:35:13.508041 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:35:13.508061 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:35:13.508081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:35:13.508104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:35:13.508124 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:35:13.508143 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:35:13.508163 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:35:13.508184 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:35:13.508204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:35:13.508224 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:35:13.508244 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:35:13.508264 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:35:13.508289 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:35:13.508310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:35:13.508331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:35:13.508355 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:35:13.508376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:35:13.508398 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:35:13.508418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:35:13.508439 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:35:13.508463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:35:13.508484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:35:13.508505 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 8 00:35:13.508527 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 8 00:35:13.508556 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:35:13.508578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:35:13.508598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:35:13.508619 kernel: loop: module loaded
Nov 8 00:35:13.508640 kernel: fuse: init (API version 7.39)
Nov 8 00:35:13.508662 kernel: ACPI: bus type drm_connector registered
Nov 8 00:35:13.509390 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:35:13.509428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:35:13.509449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:35:13.509509 systemd-journald[1583]: Collecting audit messages is disabled.
Nov 8 00:35:13.509546 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:35:13.509568 systemd-journald[1583]: Journal started
Nov 8 00:35:13.509610 systemd-journald[1583]: Runtime Journal (/run/log/journal/ec2e408a496f073c0b6f37519f312105) is 4.7M, max 38.2M, 33.4M free.
Nov 8 00:35:13.512718 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:35:13.515120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:35:13.516045 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:35:13.516928 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:35:13.517772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:35:13.518567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:35:13.519787 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:35:13.521243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:35:13.522995 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:35:13.523272 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:35:13.524528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:35:13.525113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:35:13.526214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:35:13.526473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:35:13.528295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:35:13.528569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:35:13.529896 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:35:13.530149 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:35:13.531855 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:35:13.532166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:35:13.533501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:35:13.535146 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:35:13.536384 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:35:13.555164 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:35:13.562867 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:35:13.571953 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:35:13.573843 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:35:13.585642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:35:13.595889 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:35:13.597848 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:35:13.604946 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:35:13.608832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:35:13.614424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:35:13.630057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:35:13.635333 systemd-journald[1583]: Time spent on flushing to /var/log/journal/ec2e408a496f073c0b6f37519f312105 is 73.688ms for 952 entries.
Nov 8 00:35:13.635333 systemd-journald[1583]: System Journal (/var/log/journal/ec2e408a496f073c0b6f37519f312105) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:35:13.718603 systemd-journald[1583]: Received client request to flush runtime journal.
Nov 8 00:35:13.646961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:35:13.647932 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:35:13.651003 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:35:13.652080 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:35:13.660870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:35:13.671899 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:35:13.693983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:35:13.718929 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 00:35:13.720271 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:35:13.730562 systemd-tmpfiles[1624]: ACLs are not supported, ignoring.
Nov 8 00:35:13.730588 systemd-tmpfiles[1624]: ACLs are not supported, ignoring.
Nov 8 00:35:13.738388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:35:13.748939 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:35:13.795194 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:35:13.805059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:35:13.829786 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Nov 8 00:35:13.829816 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Nov 8 00:35:13.838068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:35:14.324825 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:35:14.333888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:35:14.359660 systemd-udevd[1652]: Using default interface naming scheme 'v255'.
Nov 8 00:35:14.404893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:35:14.413572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:35:14.434849 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:35:14.467661 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 8 00:35:14.472515 (udev-worker)[1665]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:35:14.502220 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:35:14.563723 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:35:14.569341 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:35:14.569420 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Nov 8 00:35:14.571723 kernel: ACPI: button: Sleep Button [SLPF]
Nov 8 00:35:14.576751 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 8 00:35:14.593389 systemd-networkd[1655]: lo: Link UP
Nov 8 00:35:14.593863 systemd-networkd[1655]: lo: Gained carrier
Nov 8 00:35:14.595643 systemd-networkd[1655]: Enumeration completed
Nov 8 00:35:14.596016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:35:14.598249 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:35:14.599238 systemd-networkd[1655]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:35:14.602148 systemd-networkd[1655]: eth0: Link UP
Nov 8 00:35:14.602537 systemd-networkd[1655]: eth0: Gained carrier
Nov 8 00:35:14.602645 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:35:14.607022 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:35:14.613877 systemd-networkd[1655]: eth0: DHCPv4 address 172.31.30.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:35:14.620707 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Nov 8 00:35:14.649749 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1660)
Nov 8 00:35:14.712603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:35:14.724002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:35:14.724340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:14.735999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:35:14.738711 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:35:14.822029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:35:14.832049 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:35:14.855962 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:35:14.874703 lvm[1775]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:35:14.881169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:14.901158 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:35:14.903061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:35:14.907041 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:35:14.915788 lvm[1781]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:35:14.942484 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:35:14.943962 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:35:14.944514 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:35:14.944668 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:35:14.945671 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:35:14.947492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:35:14.952968 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:35:14.956265 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:35:14.958983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:35:14.965937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:35:14.971041 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:35:14.982879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:35:14.988369 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:35:15.008492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:35:15.019024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:35:15.019914 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:35:15.025389 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:35:15.096731 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:35:15.122728 kernel: loop1: detected capacity change from 0 to 224512
Nov 8 00:35:15.198085 kernel: loop2: detected capacity change from 0 to 142488
Nov 8 00:35:15.299930 kernel: loop3: detected capacity change from 0 to 61336
Nov 8 00:35:15.350770 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 00:35:15.381718 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 00:35:15.416910 kernel: loop6: detected capacity change from 0 to 142488
Nov 8 00:35:15.445177 kernel: loop7: detected capacity change from 0 to 61336
Nov 8 00:35:15.463759 (sd-merge)[1803]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 8 00:35:15.464377 (sd-merge)[1803]: Merged extensions into '/usr'.
Nov 8 00:35:15.486662 systemd[1]: Reloading requested from client PID 1789 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:35:15.486707 systemd[1]: Reloading...
Nov 8 00:35:15.584718 zram_generator::config[1828]: No configuration found.
Nov 8 00:35:15.757452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:35:15.859394 systemd[1]: Reloading finished in 372 ms.
Nov 8 00:35:15.874853 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:35:15.888985 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:35:15.893915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:35:15.898810 systemd[1]: Reloading requested from client PID 1888 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:35:15.898830 systemd[1]: Reloading...
Nov 8 00:35:15.925740 ldconfig[1785]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:35:15.944446 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:35:15.945607 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:35:15.946837 systemd-networkd[1655]: eth0: Gained IPv6LL
Nov 8 00:35:15.950835 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:35:15.951526 systemd-tmpfiles[1889]: ACLs are not supported, ignoring.
Nov 8 00:35:15.951747 systemd-tmpfiles[1889]: ACLs are not supported, ignoring.
Nov 8 00:35:15.962784 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:35:15.962976 systemd-tmpfiles[1889]: Skipping /boot
Nov 8 00:35:15.979072 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:35:15.979236 systemd-tmpfiles[1889]: Skipping /boot
Nov 8 00:35:16.023729 zram_generator::config[1920]: No configuration found.
Nov 8 00:35:16.160170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:35:16.241658 systemd[1]: Reloading finished in 342 ms.
Nov 8 00:35:16.261731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:35:16.262551 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:35:16.267750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:35:16.279018 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:35:16.285983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:35:16.292657 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:35:16.298900 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:35:16.313501 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:35:16.331643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:35:16.332931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:35:16.337058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:35:16.354847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:35:16.361596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:35:16.366907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:35:16.367874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:35:16.393842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:35:16.394335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:35:16.394726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:35:16.394985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 8 00:35:16.400134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:35:16.400397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:35:16.410899 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:35:16.421010 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:35:16.421285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:35:16.425627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:35:16.427916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:35:16.439040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:35:16.439496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:35:16.447794 augenrules[2016]: No rules Nov 8 00:35:16.453006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:35:16.462118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:35:16.465954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:35:16.466174 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:35:16.466404 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:35:16.467892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:35:16.470557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:35:16.473659 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 8 00:35:16.476392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:35:16.476654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:35:16.485498 systemd[1]: Finished ensure-sysext.service. Nov 8 00:35:16.497474 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:35:16.497798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:35:16.501018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:35:16.507389 systemd-resolved[1987]: Positive Trust Anchors: Nov 8 00:35:16.507408 systemd-resolved[1987]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:35:16.507471 systemd-resolved[1987]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:35:16.511276 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:35:16.516297 systemd-resolved[1987]: Defaulting to hostname 'linux'. Nov 8 00:35:16.521854 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:35:16.522600 systemd[1]: Reached target network.target - Network. Nov 8 00:35:16.523199 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:35:16.524282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:35:16.528783 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Nov 8 00:35:16.533619 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:35:16.534998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:35:16.535200 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:35:16.536407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:35:16.537199 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:35:16.538018 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:35:16.538494 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:35:16.538913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:35:16.539301 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:35:16.539349 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:35:16.539740 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:35:16.541709 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:35:16.543587 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:35:16.545464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:35:16.551034 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:35:16.551717 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:35:16.552284 systemd[1]: Reached target basic.target - Basic System. 
Nov 8 00:35:16.553362 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:35:16.553440 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:35:16.553478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:35:16.556566 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:35:16.559904 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:35:16.565998 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:35:16.572828 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:35:16.587901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:35:16.588794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:35:16.613808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:35:16.624721 jq[2044]: false Nov 8 00:35:16.621079 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:35:16.634209 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:35:16.654361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:35:16.667844 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:35:16.674460 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:35:16.688892 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:35:16.711929 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:35:16.713093 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Nov 8 00:35:16.713877 dbus-daemon[2042]: [system] SELinux support is enabled Nov 8 00:35:16.731881 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:35:16.738033 dbus-daemon[2042]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1655 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:35:16.754317 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:35:16.761362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:35:16.762824 coreos-metadata[2041]: Nov 08 00:35:16.761 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:35:16.766021 coreos-metadata[2041]: Nov 08 00:35:16.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 8 00:35:16.766580 coreos-metadata[2041]: Nov 08 00:35:16.766 INFO Fetch successful Nov 8 00:35:16.766580 coreos-metadata[2041]: Nov 08 00:35:16.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 8 00:35:16.768101 coreos-metadata[2041]: Nov 08 00:35:16.767 INFO Fetch successful Nov 8 00:35:16.768101 coreos-metadata[2041]: Nov 08 00:35:16.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 8 00:35:16.768882 coreos-metadata[2041]: Nov 08 00:35:16.768 INFO Fetch successful Nov 8 00:35:16.768882 coreos-metadata[2041]: Nov 08 00:35:16.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 8 00:35:16.770206 coreos-metadata[2041]: Nov 08 00:35:16.769 INFO Fetch successful Nov 8 00:35:16.770206 coreos-metadata[2041]: Nov 08 00:35:16.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 8 00:35:16.772608 coreos-metadata[2041]: Nov 08 00:35:16.772 INFO Fetch failed with 404: resource not found Nov 8 00:35:16.772608 
coreos-metadata[2041]: Nov 08 00:35:16.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 8 00:35:16.773477 coreos-metadata[2041]: Nov 08 00:35:16.773 INFO Fetch successful Nov 8 00:35:16.773477 coreos-metadata[2041]: Nov 08 00:35:16.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 8 00:35:16.774303 coreos-metadata[2041]: Nov 08 00:35:16.774 INFO Fetch successful Nov 8 00:35:16.774303 coreos-metadata[2041]: Nov 08 00:35:16.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 8 00:35:16.775207 coreos-metadata[2041]: Nov 08 00:35:16.775 INFO Fetch successful Nov 8 00:35:16.775327 coreos-metadata[2041]: Nov 08 00:35:16.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 8 00:35:16.776069 coreos-metadata[2041]: Nov 08 00:35:16.775 INFO Fetch successful Nov 8 00:35:16.776302 coreos-metadata[2041]: Nov 08 00:35:16.776 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 8 00:35:16.780824 coreos-metadata[2041]: Nov 08 00:35:16.779 INFO Fetch successful Nov 8 00:35:16.780913 extend-filesystems[2045]: Found loop4 Nov 8 00:35:16.780913 extend-filesystems[2045]: Found loop5 Nov 8 00:35:16.783157 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 8 00:35:16.785960 jq[2073]: true Nov 8 00:35:16.786191 extend-filesystems[2045]: Found loop6 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found loop7 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1p1 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1p2 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1p3 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found usr Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1p4 Nov 8 00:35:16.786191 extend-filesystems[2045]: Found nvme0n1p6 Nov 8 00:35:16.786590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:35:16.790396 extend-filesystems[2045]: Found nvme0n1p7 Nov 8 00:35:16.790396 extend-filesystems[2045]: Found nvme0n1p9 Nov 8 00:35:16.790396 extend-filesystems[2045]: Checking size of /dev/nvme0n1p9 Nov 8 00:35:16.819144 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:35:16.819477 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:35:16.822312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:35:16.833001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:35:16.833335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 8 00:35:16.842787 update_engine[2069]: I20251108 00:35:16.836900 2069 main.cc:92] Flatcar Update Engine starting Nov 8 00:35:16.842787 update_engine[2069]: I20251108 00:35:16.839078 2069 update_check_scheduler.cc:74] Next update check in 3m29s Nov 8 00:35:16.855016 ntpd[2051]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: ---------------------------------------------------- Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: corporation. Support and training for ntp-4 are Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: available at https://www.nwtime.org/support Nov 8 00:35:16.855984 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: ---------------------------------------------------- Nov 8 00:35:16.855046 ntpd[2051]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:35:16.855056 ntpd[2051]: ---------------------------------------------------- Nov 8 00:35:16.855065 ntpd[2051]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:35:16.855074 ntpd[2051]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:35:16.855084 ntpd[2051]: corporation. 
Support and training for ntp-4 are Nov 8 00:35:16.855092 ntpd[2051]: available at https://www.nwtime.org/support Nov 8 00:35:16.855102 ntpd[2051]: ---------------------------------------------------- Nov 8 00:35:16.868863 ntpd[2051]: proto: precision = 0.079 usec (-24) Nov 8 00:35:16.876250 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: proto: precision = 0.079 usec (-24) Nov 8 00:35:16.876250 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: basedate set to 2025-10-26 Nov 8 00:35:16.876250 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: gps base set to 2025-10-26 (week 2390) Nov 8 00:35:16.869195 ntpd[2051]: basedate set to 2025-10-26 Nov 8 00:35:16.869211 ntpd[2051]: gps base set to 2025-10-26 (week 2390) Nov 8 00:35:16.879515 ntpd[2051]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:35:16.882431 ntpd[2051]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:35:16.882881 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:35:16.882881 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:35:16.882881 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:35:16.882881 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen normally on 3 eth0 172.31.30.13:123 Nov 8 00:35:16.882618 ntpd[2051]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:35:16.882655 ntpd[2051]: Listen normally on 3 eth0 172.31.30.13:123 Nov 8 00:35:16.886754 ntpd[2051]: Listen normally on 4 lo [::1]:123 Nov 8 00:35:16.887201 (ntainerd)[2090]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:35:16.890657 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen normally on 4 lo [::1]:123 Nov 8 00:35:16.890657 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listen normally on 5 eth0 [fe80::448:4fff:fe94:e6fb%2]:123 Nov 8 00:35:16.890657 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: Listening on routing socket on fd #22 for interface updates Nov 8 00:35:16.890657 
ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:35:16.890657 ntpd[2051]: 8 Nov 00:35:16 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:35:16.886843 ntpd[2051]: Listen normally on 5 eth0 [fe80::448:4fff:fe94:e6fb%2]:123 Nov 8 00:35:16.886896 ntpd[2051]: Listening on routing socket on fd #22 for interface updates Nov 8 00:35:16.888484 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:35:16.888520 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:35:16.906988 jq[2089]: true Nov 8 00:35:16.914879 extend-filesystems[2045]: Resized partition /dev/nvme0n1p9 Nov 8 00:35:16.928779 extend-filesystems[2109]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:35:16.955707 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 8 00:35:16.976321 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:35:16.984449 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:35:16.992110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:35:16.992152 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:35:17.000077 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:35:17.001841 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:35:17.001884 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:35:17.005598 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Nov 8 00:35:17.033723 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:35:17.038452 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:35:17.050077 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:35:17.098891 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 8 00:35:17.099626 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:35:17.112946 systemd-logind[2066]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:35:17.113005 systemd-logind[2066]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 8 00:35:17.113032 systemd-logind[2066]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:35:17.113292 systemd-logind[2066]: New seat seat0. Nov 8 00:35:17.121799 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:35:17.166117 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 8 00:35:17.185617 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1654) Nov 8 00:35:17.191717 extend-filesystems[2109]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 8 00:35:17.191717 extend-filesystems[2109]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:35:17.191717 extend-filesystems[2109]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 8 00:35:17.189134 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:35:17.204076 extend-filesystems[2045]: Resized filesystem in /dev/nvme0n1p9 Nov 8 00:35:17.189479 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 8 00:35:17.223712 bash[2146]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:35:17.220445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:35:17.234242 systemd[1]: Starting sshkeys.service... Nov 8 00:35:17.254745 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:35:17.261362 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:35:17.435773 amazon-ssm-agent[2136]: Initializing new seelog logger Nov 8 00:35:17.442949 amazon-ssm-agent[2136]: New Seelog Logger Creation Complete Nov 8 00:35:17.446010 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.446010 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.446010 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 processing appconfig overrides Nov 8 00:35:17.453653 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.453653 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.453653 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO Proxy environment variables: Nov 8 00:35:17.455120 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 processing appconfig overrides Nov 8 00:35:17.455572 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.460708 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.460708 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 processing appconfig overrides Nov 8 00:35:17.478712 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 8 00:35:17.478712 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:35:17.478712 amazon-ssm-agent[2136]: 2025/11/08 00:35:17 processing appconfig overrides Nov 8 00:35:17.561419 coreos-metadata[2168]: Nov 08 00:35:17.556 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:35:17.561898 coreos-metadata[2168]: Nov 08 00:35:17.561 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:35:17.562424 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:35:17.565600 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:35:17.566352 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO no_proxy: Nov 8 00:35:17.566425 coreos-metadata[2168]: Nov 08 00:35:17.562 INFO Fetch successful Nov 8 00:35:17.566425 coreos-metadata[2168]: Nov 08 00:35:17.562 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:35:17.566425 coreos-metadata[2168]: Nov 08 00:35:17.564 INFO Fetch successful Nov 8 00:35:17.567004 dbus-daemon[2042]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2125 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:35:17.572308 unknown[2168]: wrote ssh authorized keys file for user: core Nov 8 00:35:17.588156 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:35:17.647489 update-ssh-keys[2245]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:35:17.650201 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:35:17.662961 systemd[1]: Finished sshkeys.service. 
Nov 8 00:35:17.664908 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO https_proxy: Nov 8 00:35:17.695332 sshd_keygen[2084]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:35:17.698874 polkitd[2242]: Started polkitd version 121 Nov 8 00:35:17.721973 locksmithd[2126]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:35:17.749387 polkitd[2242]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:35:17.753301 polkitd[2242]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:35:17.763745 polkitd[2242]: Finished loading, compiling and executing 2 rules Nov 8 00:35:17.764731 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO http_proxy: Nov 8 00:35:17.770301 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:35:17.770509 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:35:17.775535 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:35:17.778956 polkitd[2242]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:35:17.788051 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:35:17.842493 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:35:17.842866 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:35:17.857226 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:35:17.867873 systemd-hostnamed[2125]: Hostname set to (transient) Nov 8 00:35:17.868877 systemd-resolved[1987]: System hostname changed to 'ip-172-31-30-13'. Nov 8 00:35:17.871356 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:35:17.905609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 8 00:35:17.934997 containerd[2090]: time="2025-11-08T00:35:17.934879545Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:35:17.937473 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:35:17.948151 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:35:17.951216 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:35:17.970175 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:35:17.989898 containerd[2090]: time="2025-11-08T00:35:17.989836215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:35:17.993571 containerd[2090]: time="2025-11-08T00:35:17.993513522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:35:17.993571 containerd[2090]: time="2025-11-08T00:35:17.993568358Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:35:17.993790 containerd[2090]: time="2025-11-08T00:35:17.993592076Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:35:17.993840 containerd[2090]: time="2025-11-08T00:35:17.993805059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:35:17.993840 containerd[2090]: time="2025-11-08T00:35:17.993829899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995085834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995117846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995508378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995531166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995552054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995568232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:35:17.995735 containerd[2090]: time="2025-11-08T00:35:17.995667135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:35:17.996032 containerd[2090]: time="2025-11-08T00:35:17.996017948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:35:17.996283 containerd[2090]: time="2025-11-08T00:35:17.996255725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:35:17.996343 containerd[2090]: time="2025-11-08T00:35:17.996285212Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:35:17.997905 containerd[2090]: time="2025-11-08T00:35:17.997878773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:35:17.997972 containerd[2090]: time="2025-11-08T00:35:17.997949151Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:35:18.003831 containerd[2090]: time="2025-11-08T00:35:18.003783940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:35:18.003951 containerd[2090]: time="2025-11-08T00:35:18.003873608Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:35:18.003951 containerd[2090]: time="2025-11-08T00:35:18.003896345Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:35:18.004042 containerd[2090]: time="2025-11-08T00:35:18.003971942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:35:18.004042 containerd[2090]: time="2025-11-08T00:35:18.003994675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:35:18.004703 containerd[2090]: time="2025-11-08T00:35:18.004229583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:35:18.005122 containerd[2090]: time="2025-11-08T00:35:18.005085368Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:35:18.005327 containerd[2090]: time="2025-11-08T00:35:18.005291562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:35:18.006305 containerd[2090]: time="2025-11-08T00:35:18.006282333Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:35:18.006360 containerd[2090]: time="2025-11-08T00:35:18.006314719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:35:18.006422 containerd[2090]: time="2025-11-08T00:35:18.006358242Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006422 containerd[2090]: time="2025-11-08T00:35:18.006386768Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006422 containerd[2090]: time="2025-11-08T00:35:18.006405924Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006528 containerd[2090]: time="2025-11-08T00:35:18.006444234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006528 containerd[2090]: time="2025-11-08T00:35:18.006468239Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006528 containerd[2090]: time="2025-11-08T00:35:18.006512348Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006638 containerd[2090]: time="2025-11-08T00:35:18.006533238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006638 containerd[2090]: time="2025-11-08T00:35:18.006561534Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:35:18.006638 containerd[2090]: time="2025-11-08T00:35:18.006610058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.006638 containerd[2090]: time="2025-11-08T00:35:18.006632411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006664869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006710258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006730402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006757022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006774933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006794689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006818490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006840450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006858841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006895368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006915815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006942383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006974858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.006993004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.007832 containerd[2090]: time="2025-11-08T00:35:18.007010629Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007082367Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007108316Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007202027Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007222198Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007237647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007257723Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007274716Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:35:18.008344 containerd[2090]: time="2025-11-08T00:35:18.007290133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:35:18.008630 containerd[2090]: time="2025-11-08T00:35:18.007717530Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:35:18.008630 containerd[2090]: time="2025-11-08T00:35:18.007812376Z" level=info msg="Connect containerd service"
Nov 8 00:35:18.008630 containerd[2090]: time="2025-11-08T00:35:18.007859178Z" level=info msg="using legacy CRI server"
Nov 8 00:35:18.008630 containerd[2090]: time="2025-11-08T00:35:18.007871318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:35:18.008630 containerd[2090]: time="2025-11-08T00:35:18.008005343Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:35:18.009876 containerd[2090]: time="2025-11-08T00:35:18.009830238Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:35:18.010048 containerd[2090]: time="2025-11-08T00:35:18.009991223Z" level=info msg="Start subscribing containerd event"
Nov 8 00:35:18.010110 containerd[2090]: time="2025-11-08T00:35:18.010085729Z" level=info msg="Start recovering state"
Nov 8 00:35:18.010247 containerd[2090]: time="2025-11-08T00:35:18.010169858Z" level=info msg="Start event monitor"
Nov 8 00:35:18.010247 containerd[2090]: time="2025-11-08T00:35:18.010193750Z" level=info msg="Start snapshots syncer"
Nov 8 00:35:18.010247 containerd[2090]: time="2025-11-08T00:35:18.010207048Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:35:18.010247 containerd[2090]: time="2025-11-08T00:35:18.010218992Z" level=info msg="Start streaming server"
Nov 8 00:35:18.012702 containerd[2090]: time="2025-11-08T00:35:18.011807754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:35:18.012702 containerd[2090]: time="2025-11-08T00:35:18.011886663Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:35:18.016570 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:35:18.017815 containerd[2090]: time="2025-11-08T00:35:18.017777180Z" level=info msg="containerd successfully booted in 0.084977s"
Nov 8 00:35:18.069457 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO Agent will take identity from EC2
Nov 8 00:35:18.168072 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] Starting Core Agent
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [Registrar] Starting registrar module
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:18 INFO [EC2Identity] EC2 registration was successful.
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:18 INFO [CredentialRefresher] credentialRefresher has started
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:18 INFO [CredentialRefresher] Starting credentials refresher loop
Nov 8 00:35:18.183382 amazon-ssm-agent[2136]: 2025-11-08 00:35:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Nov 8 00:35:18.267169 amazon-ssm-agent[2136]: 2025-11-08 00:35:18 INFO [CredentialRefresher] Next credential rotation will be in 31.47499447765 minutes
Nov 8 00:35:19.196886 amazon-ssm-agent[2136]: 2025-11-08 00:35:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Nov 8 00:35:19.297200 amazon-ssm-agent[2136]: 2025-11-08 00:35:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2305) started
Nov 8 00:35:19.397998 amazon-ssm-agent[2136]: 2025-11-08 00:35:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Nov 8 00:35:19.628876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:35:19.630128 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:35:19.631860 systemd[1]: Startup finished in 6.418s (kernel) + 7.192s (userspace) = 13.611s.
Nov 8 00:35:19.633442 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:35:20.286772 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:35:20.294362 systemd[1]: Started sshd@0-172.31.30.13:22-139.178.89.65:41150.service - OpenSSH per-connection server daemon (139.178.89.65:41150).
Nov 8 00:35:20.463253 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 41150 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:20.465116 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:20.484112 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:35:20.485116 systemd-logind[2066]: New session 1 of user core.
Nov 8 00:35:20.494069 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:35:20.514654 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:35:20.528186 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:35:20.533017 (systemd)[2339]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:35:20.687417 systemd[2339]: Queued start job for default target default.target.
Nov 8 00:35:20.687961 systemd[2339]: Created slice app.slice - User Application Slice.
Nov 8 00:35:20.687999 systemd[2339]: Reached target paths.target - Paths.
Nov 8 00:35:20.688019 systemd[2339]: Reached target timers.target - Timers.
Nov 8 00:35:20.696864 systemd[2339]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:35:20.705426 systemd[2339]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:35:20.706427 systemd[2339]: Reached target sockets.target - Sockets.
Nov 8 00:35:20.706454 systemd[2339]: Reached target basic.target - Basic System.
Nov 8 00:35:20.706511 systemd[2339]: Reached target default.target - Main User Target.
Nov 8 00:35:20.706549 systemd[2339]: Startup finished in 165ms.
Nov 8 00:35:20.706927 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:35:20.715670 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:35:20.722976 kubelet[2323]: E1108 00:35:20.722940 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:35:20.726900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:35:20.727152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:35:20.873223 systemd[1]: Started sshd@1-172.31.30.13:22-139.178.89.65:41160.service - OpenSSH per-connection server daemon (139.178.89.65:41160).
Nov 8 00:35:21.027950 sshd[2354]: Accepted publickey for core from 139.178.89.65 port 41160 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:21.029490 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:21.035326 systemd-logind[2066]: New session 2 of user core.
Nov 8 00:35:21.043109 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:35:21.163351 sshd[2354]: pam_unix(sshd:session): session closed for user core
Nov 8 00:35:21.167797 systemd[1]: sshd@1-172.31.30.13:22-139.178.89.65:41160.service: Deactivated successfully.
Nov 8 00:35:21.172118 systemd-logind[2066]: Session 2 logged out. Waiting for processes to exit.
Nov 8 00:35:21.172759 systemd[1]: session-2.scope: Deactivated successfully.
Nov 8 00:35:21.174404 systemd-logind[2066]: Removed session 2.
Nov 8 00:35:21.193052 systemd[1]: Started sshd@2-172.31.30.13:22-139.178.89.65:41166.service - OpenSSH per-connection server daemon (139.178.89.65:41166).
Nov 8 00:35:21.349726 sshd[2362]: Accepted publickey for core from 139.178.89.65 port 41166 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:21.353715 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:21.369394 systemd-logind[2066]: New session 3 of user core.
Nov 8 00:35:21.376079 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 00:35:21.492418 sshd[2362]: pam_unix(sshd:session): session closed for user core
Nov 8 00:35:21.496113 systemd[1]: sshd@2-172.31.30.13:22-139.178.89.65:41166.service: Deactivated successfully.
Nov 8 00:35:21.499256 systemd[1]: session-3.scope: Deactivated successfully.
Nov 8 00:35:21.499916 systemd-logind[2066]: Session 3 logged out. Waiting for processes to exit.
Nov 8 00:35:21.500992 systemd-logind[2066]: Removed session 3.
Nov 8 00:35:21.528773 systemd[1]: Started sshd@3-172.31.30.13:22-139.178.89.65:41174.service - OpenSSH per-connection server daemon (139.178.89.65:41174).
Nov 8 00:35:21.686934 sshd[2370]: Accepted publickey for core from 139.178.89.65 port 41174 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:21.688915 sshd[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:21.693784 systemd-logind[2066]: New session 4 of user core.
Nov 8 00:35:21.701154 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 00:35:21.824011 sshd[2370]: pam_unix(sshd:session): session closed for user core
Nov 8 00:35:21.827618 systemd[1]: sshd@3-172.31.30.13:22-139.178.89.65:41174.service: Deactivated successfully.
Nov 8 00:35:21.833250 systemd-logind[2066]: Session 4 logged out. Waiting for processes to exit.
Nov 8 00:35:21.833985 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:35:21.835219 systemd-logind[2066]: Removed session 4.
Nov 8 00:35:21.864053 systemd[1]: Started sshd@4-172.31.30.13:22-139.178.89.65:41190.service - OpenSSH per-connection server daemon (139.178.89.65:41190).
Nov 8 00:35:22.026134 sshd[2378]: Accepted publickey for core from 139.178.89.65 port 41190 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:22.027859 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:22.033337 systemd-logind[2066]: New session 5 of user core.
Nov 8 00:35:22.041370 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:35:22.156093 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 00:35:22.156399 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:35:22.173576 sudo[2382]: pam_unix(sudo:session): session closed for user root
Nov 8 00:35:22.198710 sshd[2378]: pam_unix(sshd:session): session closed for user core
Nov 8 00:35:22.202641 systemd[1]: sshd@4-172.31.30.13:22-139.178.89.65:41190.service: Deactivated successfully.
Nov 8 00:35:22.207754 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:35:22.208800 systemd-logind[2066]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:35:22.209996 systemd-logind[2066]: Removed session 5.
Nov 8 00:35:22.226059 systemd[1]: Started sshd@5-172.31.30.13:22-139.178.89.65:41206.service - OpenSSH per-connection server daemon (139.178.89.65:41206).
Nov 8 00:35:22.386766 sshd[2387]: Accepted publickey for core from 139.178.89.65 port 41206 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:22.388503 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:22.394134 systemd-logind[2066]: New session 6 of user core.
Nov 8 00:35:22.400085 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 00:35:22.501165 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 8 00:35:22.501563 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:35:22.505784 sudo[2392]: pam_unix(sudo:session): session closed for user root
Nov 8 00:35:22.511512 sudo[2391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 8 00:35:22.511950 sudo[2391]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:35:22.527097 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 8 00:35:22.529498 auditctl[2395]: No rules
Nov 8 00:35:22.530011 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 8 00:35:22.530351 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 8 00:35:22.546004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:35:22.571775 augenrules[2414]: No rules
Nov 8 00:35:22.573499 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:35:22.575307 sudo[2391]: pam_unix(sudo:session): session closed for user root
Nov 8 00:35:22.600104 sshd[2387]: pam_unix(sshd:session): session closed for user core
Nov 8 00:35:22.603052 systemd[1]: sshd@5-172.31.30.13:22-139.178.89.65:41206.service: Deactivated successfully.
Nov 8 00:35:22.606753 systemd-logind[2066]: Session 6 logged out. Waiting for processes to exit.
Nov 8 00:35:22.607235 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 00:35:22.608295 systemd-logind[2066]: Removed session 6.
Nov 8 00:35:22.629084 systemd[1]: Started sshd@6-172.31.30.13:22-139.178.89.65:41216.service - OpenSSH per-connection server daemon (139.178.89.65:41216).
Nov 8 00:35:22.791628 sshd[2423]: Accepted publickey for core from 139.178.89.65 port 41216 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk
Nov 8 00:35:22.793148 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:35:22.798389 systemd-logind[2066]: New session 7 of user core.
Nov 8 00:35:22.808072 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 8 00:35:22.908597 sudo[2427]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 8 00:35:22.909024 sudo[2427]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:35:24.082486 systemd-resolved[1987]: Clock change detected. Flushing caches.
Nov 8 00:35:24.245772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:35:24.253360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:35:24.301608 systemd[1]: Reloading requested from client PID 2461 ('systemctl') (unit session-7.scope)...
Nov 8 00:35:24.301629 systemd[1]: Reloading...
Nov 8 00:35:24.438609 zram_generator::config[2504]: No configuration found.
Nov 8 00:35:24.601096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:35:24.688363 systemd[1]: Reloading finished in 386 ms.
Nov 8 00:35:24.736206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:35:24.742483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:35:24.748140 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:35:24.748708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:35:24.757302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:35:25.181922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:35:25.193169 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:35:25.247809 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:35:25.248144 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:35:25.248203 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:35:25.248377 kubelet[2579]: I1108 00:35:25.248349 2579 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:35:25.639346 kubelet[2579]: I1108 00:35:25.638901 2579 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:35:25.639346 kubelet[2579]: I1108 00:35:25.638936 2579 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:35:25.639560 kubelet[2579]: I1108 00:35:25.639538 2579 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:35:25.688721 kubelet[2579]: I1108 00:35:25.688686 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:35:25.707863 kubelet[2579]: E1108 00:35:25.707628 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:35:25.707863 kubelet[2579]: I1108 00:35:25.707707 2579 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:35:25.710490 kubelet[2579]: I1108 00:35:25.710119 2579 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:35:25.710633 kubelet[2579]: I1108 00:35:25.710600 2579 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:35:25.710830 kubelet[2579]: I1108 00:35:25.710634 2579 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.30.13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:35:25.710830 kubelet[2579]: I1108 00:35:25.710815 2579 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:35:25.710830 kubelet[2579]: I1108 00:35:25.710829 2579 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:35:25.711086 kubelet[2579]: I1108 00:35:25.710978 2579 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:35:25.716214 kubelet[2579]: I1108 00:35:25.716175 2579 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:35:25.716214 kubelet[2579]: I1108 00:35:25.716235 2579 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:35:25.716406 kubelet[2579]: I1108 00:35:25.716265 2579 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:35:25.716406 kubelet[2579]: I1108 00:35:25.716278 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:35:25.725899 kubelet[2579]: E1108 00:35:25.725517 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:35:25.725899 kubelet[2579]: E1108 00:35:25.725599 2579 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:35:25.726711 kubelet[2579]: I1108 00:35:25.726427 2579 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:35:25.727018 kubelet[2579]: I1108 00:35:25.727001 2579 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:35:25.728145 kubelet[2579]: W1108 00:35:25.728103 2579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:35:25.734471 kubelet[2579]: I1108 00:35:25.733735 2579 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:35:25.734471 kubelet[2579]: I1108 00:35:25.733805 2579 server.go:1287] "Started kubelet"
Nov 8 00:35:25.740711 kubelet[2579]: I1108 00:35:25.740660 2579 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:35:25.742312 kubelet[2579]: I1108 00:35:25.741980 2579 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:35:25.744399 kubelet[2579]: I1108 00:35:25.743182 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:35:25.744399 kubelet[2579]: I1108 00:35:25.743770 2579 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:35:25.745465 kubelet[2579]: I1108 00:35:25.745441 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:35:25.746835 kubelet[2579]: E1108 00:35:25.744262 2579 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1875e0ea36eb3578 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:25.733770616 +0000 UTC m=+0.534131296,LastTimestamp:2025-11-08 00:35:25.733770616 +0000 UTC m=+0.534131296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}"
Nov 8 00:35:25.746835 kubelet[2579]: W1108 00:35:25.746638 2579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Nov 8 00:35:25.746835 kubelet[2579]: E1108 00:35:25.746685 2579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Nov 8 00:35:25.747336 kubelet[2579]: W1108 00:35:25.747102 2579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.30.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Nov 8 00:35:25.747336 kubelet[2579]: E1108 00:35:25.747137 2579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.30.13\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Nov 8 00:35:25.751050 kubelet[2579]: I1108 00:35:25.750991 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:35:25.754385 kubelet[2579]: E1108 00:35:25.754265 2579 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1875e0ea379da42a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:25.745464362 +0000 UTC m=+0.545825043,LastTimestamp:2025-11-08 00:35:25.745464362 +0000 UTC m=+0.545825043,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}"
Nov 8 00:35:25.755027 kubelet[2579]: E1108 00:35:25.755005 2579 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:35:25.755597 kubelet[2579]: E1108 00:35:25.755562 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Nov 8 00:35:25.755765 kubelet[2579]: I1108 00:35:25.755701 2579 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:35:25.756081 kubelet[2579]: I1108 00:35:25.756066 2579 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:35:25.756224 kubelet[2579]: I1108 00:35:25.756202 2579 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:35:25.757166 kubelet[2579]: I1108 00:35:25.757130 2579 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:35:25.757272 kubelet[2579]: I1108 00:35:25.757240 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:35:25.761196 kubelet[2579]: I1108 00:35:25.761170 2579 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:35:25.774333 kubelet[2579]: W1108 00:35:25.773772 2579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Nov 8 00:35:25.774333 kubelet[2579]:
E1108 00:35:25.773827 2579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Nov 8 00:35:25.774880 kubelet[2579]: E1108 00:35:25.774383 2579 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1875e0ea382ef1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:25.754986979 +0000 UTC m=+0.555347660,LastTimestamp:2025-11-08 00:35:25.754986979 +0000 UTC m=+0.555347660,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}" Nov 8 00:35:25.776599 kubelet[2579]: E1108 00:35:25.775087 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.30.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Nov 8 00:35:25.796984 kubelet[2579]: I1108 00:35:25.796639 2579 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:35:25.796984 kubelet[2579]: I1108 00:35:25.796667 2579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:35:25.796984 kubelet[2579]: I1108 00:35:25.796793 2579 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:35:25.799848 
kubelet[2579]: E1108 00:35:25.797824 2579 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1875e0ea3a9523c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.30.13 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:25.795238854 +0000 UTC m=+0.595599514,LastTimestamp:2025-11-08 00:35:25.795238854 +0000 UTC m=+0.595599514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}" Nov 8 00:35:25.804021 kubelet[2579]: I1108 00:35:25.803822 2579 policy_none.go:49] "None policy: Start" Nov 8 00:35:25.804021 kubelet[2579]: I1108 00:35:25.803867 2579 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:35:25.804021 kubelet[2579]: I1108 00:35:25.803884 2579 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:35:25.813223 kubelet[2579]: E1108 00:35:25.813102 2579 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1875e0ea3a95645f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.30.13 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:25.795255391 +0000 UTC 
m=+0.595616055,LastTimestamp:2025-11-08 00:35:25.795255391 +0000 UTC m=+0.595616055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}" Nov 8 00:35:25.820645 kubelet[2579]: I1108 00:35:25.820413 2579 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:35:25.822744 kubelet[2579]: I1108 00:35:25.822722 2579 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:35:25.823970 kubelet[2579]: I1108 00:35:25.822879 2579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:35:25.823970 kubelet[2579]: I1108 00:35:25.823767 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:35:25.824642 kubelet[2579]: E1108 00:35:25.824615 2579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:35:25.824716 kubelet[2579]: E1108 00:35:25.824662 2579 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.30.13\" not found" Nov 8 00:35:25.899781 kubelet[2579]: I1108 00:35:25.898033 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:35:25.900795 kubelet[2579]: I1108 00:35:25.900769 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:35:25.900966 kubelet[2579]: I1108 00:35:25.900885 2579 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:35:25.900966 kubelet[2579]: I1108 00:35:25.900914 2579 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:35:25.900966 kubelet[2579]: I1108 00:35:25.900921 2579 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:35:25.900966 kubelet[2579]: E1108 00:35:25.900969 2579 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 8 00:35:25.924671 kubelet[2579]: I1108 00:35:25.924626 2579 kubelet_node_status.go:75] "Attempting to register node" node="172.31.30.13" Nov 8 00:35:25.930430 kubelet[2579]: I1108 00:35:25.930391 2579 kubelet_node_status.go:78] "Successfully registered node" node="172.31.30.13" Nov 8 00:35:25.930430 kubelet[2579]: E1108 00:35:25.930423 2579 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.30.13\": node \"172.31.30.13\" not found" Nov 8 00:35:25.939206 kubelet[2579]: E1108 00:35:25.939151 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.040257 kubelet[2579]: E1108 00:35:26.040211 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.140962 kubelet[2579]: E1108 00:35:26.140898 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.222041 sudo[2427]: pam_unix(sudo:session): session closed for user root Nov 8 00:35:26.241569 kubelet[2579]: E1108 00:35:26.241525 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.246574 sshd[2423]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:26.250068 systemd[1]: sshd@6-172.31.30.13:22-139.178.89.65:41216.service: Deactivated successfully. Nov 8 00:35:26.254455 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:35:26.255729 systemd-logind[2066]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:35:26.257149 systemd-logind[2066]: Removed session 7. 
Nov 8 00:35:26.342271 kubelet[2579]: E1108 00:35:26.342223 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.443061 kubelet[2579]: E1108 00:35:26.442994 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.543895 kubelet[2579]: E1108 00:35:26.543745 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.642475 kubelet[2579]: I1108 00:35:26.642427 2579 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 8 00:35:26.642664 kubelet[2579]: W1108 00:35:26.642647 2579 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 8 00:35:26.644740 kubelet[2579]: E1108 00:35:26.644680 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.726601 kubelet[2579]: E1108 00:35:26.726547 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:26.744838 kubelet[2579]: E1108 00:35:26.744788 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.845033 kubelet[2579]: E1108 00:35:26.844910 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.30.13\" not found" Nov 8 00:35:26.946734 kubelet[2579]: I1108 00:35:26.946702 2579 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Nov 8 00:35:26.947141 containerd[2090]: time="2025-11-08T00:35:26.947100696Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Nov 8 00:35:26.947785 kubelet[2579]: I1108 00:35:26.947537 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Nov 8 00:35:27.721408 kubelet[2579]: I1108 00:35:27.721366 2579 apiserver.go:52] "Watching apiserver" Nov 8 00:35:27.725606 kubelet[2579]: E1108 00:35:27.724993 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:27.726695 kubelet[2579]: E1108 00:35:27.726663 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:27.757139 kubelet[2579]: I1108 00:35:27.757103 2579 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:35:27.769327 kubelet[2579]: I1108 00:35:27.769292 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c44ed85b-0b82-43dc-8673-616abb232d4b-kube-proxy\") pod \"kube-proxy-c2hxf\" (UID: \"c44ed85b-0b82-43dc-8673-616abb232d4b\") " pod="kube-system/kube-proxy-c2hxf" Nov 8 00:35:27.769327 kubelet[2579]: I1108 00:35:27.769335 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44ed85b-0b82-43dc-8673-616abb232d4b-xtables-lock\") pod \"kube-proxy-c2hxf\" (UID: \"c44ed85b-0b82-43dc-8673-616abb232d4b\") " pod="kube-system/kube-proxy-c2hxf" Nov 8 00:35:27.769560 kubelet[2579]: I1108 00:35:27.769361 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-lib-modules\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769560 kubelet[2579]: I1108 00:35:27.769384 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-policysync\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769560 kubelet[2579]: I1108 00:35:27.769405 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-var-run-calico\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769560 kubelet[2579]: I1108 00:35:27.769427 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rft5\" (UniqueName: \"kubernetes.io/projected/f996e695-d649-43f3-b43f-b6d136d7800c-kube-api-access-5rft5\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769560 kubelet[2579]: I1108 00:35:27.769451 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/34d56e07-ff7d-441e-b5c1-bf41dd56f15b-varrun\") pod \"csi-node-driver-49v22\" (UID: \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\") " pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:27.769789 kubelet[2579]: I1108 00:35:27.769503 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-cni-net-dir\") 
pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769789 kubelet[2579]: I1108 00:35:27.769531 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-flexvol-driver-host\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769789 kubelet[2579]: I1108 00:35:27.769556 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f996e695-d649-43f3-b43f-b6d136d7800c-node-certs\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769789 kubelet[2579]: I1108 00:35:27.769596 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f996e695-d649-43f3-b43f-b6d136d7800c-tigera-ca-bundle\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769789 kubelet[2579]: I1108 00:35:27.769620 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-var-lib-calico\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769973 kubelet[2579]: I1108 00:35:27.769647 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d56e07-ff7d-441e-b5c1-bf41dd56f15b-kubelet-dir\") pod \"csi-node-driver-49v22\" (UID: 
\"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\") " pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:27.769973 kubelet[2579]: I1108 00:35:27.769674 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-cni-bin-dir\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769973 kubelet[2579]: I1108 00:35:27.769698 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-cni-log-dir\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.769973 kubelet[2579]: I1108 00:35:27.769723 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34d56e07-ff7d-441e-b5c1-bf41dd56f15b-registration-dir\") pod \"csi-node-driver-49v22\" (UID: \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\") " pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:27.769973 kubelet[2579]: I1108 00:35:27.769748 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/34d56e07-ff7d-441e-b5c1-bf41dd56f15b-socket-dir\") pod \"csi-node-driver-49v22\" (UID: \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\") " pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:27.770165 kubelet[2579]: I1108 00:35:27.769777 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/34d56e07-ff7d-441e-b5c1-bf41dd56f15b-kube-api-access-5thvs\") pod \"csi-node-driver-49v22\" (UID: \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\") " 
pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:27.770165 kubelet[2579]: I1108 00:35:27.769802 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44ed85b-0b82-43dc-8673-616abb232d4b-lib-modules\") pod \"kube-proxy-c2hxf\" (UID: \"c44ed85b-0b82-43dc-8673-616abb232d4b\") " pod="kube-system/kube-proxy-c2hxf" Nov 8 00:35:27.770165 kubelet[2579]: I1108 00:35:27.769896 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcxt\" (UniqueName: \"kubernetes.io/projected/c44ed85b-0b82-43dc-8673-616abb232d4b-kube-api-access-bbcxt\") pod \"kube-proxy-c2hxf\" (UID: \"c44ed85b-0b82-43dc-8673-616abb232d4b\") " pod="kube-system/kube-proxy-c2hxf" Nov 8 00:35:27.770165 kubelet[2579]: I1108 00:35:27.769923 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f996e695-d649-43f3-b43f-b6d136d7800c-xtables-lock\") pod \"calico-node-7xq75\" (UID: \"f996e695-d649-43f3-b43f-b6d136d7800c\") " pod="calico-system/calico-node-7xq75" Nov 8 00:35:27.873432 kubelet[2579]: E1108 00:35:27.873313 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:27.873432 kubelet[2579]: W1108 00:35:27.873341 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:27.873432 kubelet[2579]: E1108 00:35:27.873393 2579 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:27.882053 kubelet[2579]: E1108 00:35:27.881919 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:27.882053 kubelet[2579]: W1108 00:35:27.881969 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:27.882053 kubelet[2579]: E1108 00:35:27.881999 2579 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:27.899798 kubelet[2579]: E1108 00:35:27.899692 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:27.899798 kubelet[2579]: W1108 00:35:27.899714 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:27.899798 kubelet[2579]: E1108 00:35:27.899744 2579 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:27.905526 kubelet[2579]: E1108 00:35:27.905504 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:27.905810 kubelet[2579]: W1108 00:35:27.905606 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:27.905810 kubelet[2579]: E1108 00:35:27.905627 2579 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:27.911718 kubelet[2579]: E1108 00:35:27.911651 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:27.911718 kubelet[2579]: W1108 00:35:27.911667 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:27.911718 kubelet[2579]: E1108 00:35:27.911685 2579 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:28.029912 containerd[2090]: time="2025-11-08T00:35:28.029781809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xq75,Uid:f996e695-d649-43f3-b43f-b6d136d7800c,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:28.032631 containerd[2090]: time="2025-11-08T00:35:28.032595555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2hxf,Uid:c44ed85b-0b82-43dc-8673-616abb232d4b,Namespace:kube-system,Attempt:0,}" Nov 8 00:35:28.586223 containerd[2090]: time="2025-11-08T00:35:28.586171340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:35:28.588223 containerd[2090]: time="2025-11-08T00:35:28.588161083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:35:28.589079 containerd[2090]: time="2025-11-08T00:35:28.588996294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:35:28.589957 containerd[2090]: time="2025-11-08T00:35:28.589910722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:35:28.592108 containerd[2090]: time="2025-11-08T00:35:28.590658347Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:35:28.594421 containerd[2090]: time="2025-11-08T00:35:28.593375600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:35:28.594421 containerd[2090]: time="2025-11-08T00:35:28.594208734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 564.345111ms" Nov 8 00:35:28.596140 containerd[2090]: time="2025-11-08T00:35:28.596106284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.42601ms" Nov 8 00:35:28.728008 kubelet[2579]: E1108 00:35:28.727958 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:28.791220 containerd[2090]: time="2025-11-08T00:35:28.790985616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:28.791220 containerd[2090]: time="2025-11-08T00:35:28.791055826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:28.791220 containerd[2090]: time="2025-11-08T00:35:28.791081211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.791857 containerd[2090]: time="2025-11-08T00:35:28.791207681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.792349 containerd[2090]: time="2025-11-08T00:35:28.792261431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:28.792735 containerd[2090]: time="2025-11-08T00:35:28.792687217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:28.792871 containerd[2090]: time="2025-11-08T00:35:28.792723796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.793071 containerd[2090]: time="2025-11-08T00:35:28.793017974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.882380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615603972.mount: Deactivated successfully. 
Nov 8 00:35:28.901871 kubelet[2579]: E1108 00:35:28.901836 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:28.912669 containerd[2090]: time="2025-11-08T00:35:28.912506143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xq75,Uid:f996e695-d649-43f3-b43f-b6d136d7800c,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\"" Nov 8 00:35:28.918121 containerd[2090]: time="2025-11-08T00:35:28.918079453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:35:28.928808 containerd[2090]: time="2025-11-08T00:35:28.928767406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2hxf,Uid:c44ed85b-0b82-43dc-8673-616abb232d4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0411e2d628bced2bbc958a580139dde816d4918be1980d3cb9f814cabfd9092f\"" Nov 8 00:35:29.728846 kubelet[2579]: E1108 00:35:29.728791 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:30.083155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881751813.mount: Deactivated successfully. 
Nov 8 00:35:30.194246 containerd[2090]: time="2025-11-08T00:35:30.194177060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:30.196732 containerd[2090]: time="2025-11-08T00:35:30.196601385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 8 00:35:30.198532 containerd[2090]: time="2025-11-08T00:35:30.198458934Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:30.204615 containerd[2090]: time="2025-11-08T00:35:30.203359294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:30.204615 containerd[2090]: time="2025-11-08T00:35:30.204352914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.28623083s" Nov 8 00:35:30.204615 containerd[2090]: time="2025-11-08T00:35:30.204401718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:35:30.206601 containerd[2090]: time="2025-11-08T00:35:30.206545185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:35:30.207819 containerd[2090]: time="2025-11-08T00:35:30.207782387Z" level=info msg="CreateContainer within sandbox 
\"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:35:30.237858 containerd[2090]: time="2025-11-08T00:35:30.237810028Z" level=info msg="CreateContainer within sandbox \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15\"" Nov 8 00:35:30.238805 containerd[2090]: time="2025-11-08T00:35:30.238776163Z" level=info msg="StartContainer for \"c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15\"" Nov 8 00:35:30.317969 containerd[2090]: time="2025-11-08T00:35:30.317745264Z" level=info msg="StartContainer for \"c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15\" returns successfully" Nov 8 00:35:30.382941 containerd[2090]: time="2025-11-08T00:35:30.382803996Z" level=info msg="shim disconnected" id=c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15 namespace=k8s.io Nov 8 00:35:30.382941 containerd[2090]: time="2025-11-08T00:35:30.382860846Z" level=warning msg="cleaning up after shim disconnected" id=c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15 namespace=k8s.io Nov 8 00:35:30.382941 containerd[2090]: time="2025-11-08T00:35:30.382869643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:35:30.729677 kubelet[2579]: E1108 00:35:30.729535 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:30.902403 kubelet[2579]: E1108 00:35:30.902020 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:31.050023 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c046962e3937b0f5d2ef06a8a8efc349283a72eb9890e3b57aee7b9316da7d15-rootfs.mount: Deactivated successfully. Nov 8 00:35:31.432082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753240381.mount: Deactivated successfully. Nov 8 00:35:31.734310 kubelet[2579]: E1108 00:35:31.734065 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:32.018471 containerd[2090]: time="2025-11-08T00:35:32.018344643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:32.020014 containerd[2090]: time="2025-11-08T00:35:32.019808725Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:35:32.021684 containerd[2090]: time="2025-11-08T00:35:32.021634896Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:32.025332 containerd[2090]: time="2025-11-08T00:35:32.024623149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:32.025332 containerd[2090]: time="2025-11-08T00:35:32.025166458Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.818576856s" Nov 8 00:35:32.025332 containerd[2090]: time="2025-11-08T00:35:32.025211766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" 
returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:35:32.026852 containerd[2090]: time="2025-11-08T00:35:32.026820225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:35:32.028055 containerd[2090]: time="2025-11-08T00:35:32.028022211Z" level=info msg="CreateContainer within sandbox \"0411e2d628bced2bbc958a580139dde816d4918be1980d3cb9f814cabfd9092f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:35:32.053038 containerd[2090]: time="2025-11-08T00:35:32.052985539Z" level=info msg="CreateContainer within sandbox \"0411e2d628bced2bbc958a580139dde816d4918be1980d3cb9f814cabfd9092f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d57d4515fd5e38bae7d025f94a8d12e90e992da3c34de2bfd23a881af94520d4\"" Nov 8 00:35:32.053837 containerd[2090]: time="2025-11-08T00:35:32.053803992Z" level=info msg="StartContainer for \"d57d4515fd5e38bae7d025f94a8d12e90e992da3c34de2bfd23a881af94520d4\"" Nov 8 00:35:32.095427 systemd[1]: run-containerd-runc-k8s.io-d57d4515fd5e38bae7d025f94a8d12e90e992da3c34de2bfd23a881af94520d4-runc.kQYcNd.mount: Deactivated successfully. 
Nov 8 00:35:32.127569 containerd[2090]: time="2025-11-08T00:35:32.127518273Z" level=info msg="StartContainer for \"d57d4515fd5e38bae7d025f94a8d12e90e992da3c34de2bfd23a881af94520d4\" returns successfully" Nov 8 00:35:32.735195 kubelet[2579]: E1108 00:35:32.735124 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:32.901746 kubelet[2579]: E1108 00:35:32.901701 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:33.735813 kubelet[2579]: E1108 00:35:33.735767 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:34.736601 kubelet[2579]: E1108 00:35:34.736353 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:34.901703 kubelet[2579]: E1108 00:35:34.901386 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:35.141408 containerd[2090]: time="2025-11-08T00:35:35.141086652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:35.142155 containerd[2090]: time="2025-11-08T00:35:35.142105600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:35:35.143317 containerd[2090]: time="2025-11-08T00:35:35.143146504Z" level=info 
msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:35.145553 containerd[2090]: time="2025-11-08T00:35:35.145520810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:35.146369 containerd[2090]: time="2025-11-08T00:35:35.146198909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.119341732s" Nov 8 00:35:35.146369 containerd[2090]: time="2025-11-08T00:35:35.146228401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:35:35.150611 containerd[2090]: time="2025-11-08T00:35:35.148620420Z" level=info msg="CreateContainer within sandbox \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:35:35.168802 containerd[2090]: time="2025-11-08T00:35:35.168740577Z" level=info msg="CreateContainer within sandbox \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa\"" Nov 8 00:35:35.169460 containerd[2090]: time="2025-11-08T00:35:35.169382407Z" level=info msg="StartContainer for \"b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa\"" Nov 8 00:35:35.233406 containerd[2090]: time="2025-11-08T00:35:35.233361164Z" level=info 
msg="StartContainer for \"b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa\" returns successfully" Nov 8 00:35:35.737414 kubelet[2579]: E1108 00:35:35.737354 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:35.960307 kubelet[2579]: I1108 00:35:35.960141 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c2hxf" podStartSLOduration=7.863950981 podStartE2EDuration="10.960119931s" podCreationTimestamp="2025-11-08 00:35:25 +0000 UTC" firstStartedPulling="2025-11-08 00:35:28.930215745 +0000 UTC m=+3.730576413" lastFinishedPulling="2025-11-08 00:35:32.026384685 +0000 UTC m=+6.826745363" observedRunningTime="2025-11-08 00:35:32.984016285 +0000 UTC m=+7.784376969" watchObservedRunningTime="2025-11-08 00:35:35.960119931 +0000 UTC m=+10.760480593" Nov 8 00:35:36.470642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa-rootfs.mount: Deactivated successfully. 
Nov 8 00:35:36.494492 kubelet[2579]: I1108 00:35:36.494459 2579 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:35:36.732958 containerd[2090]: time="2025-11-08T00:35:36.732724639Z" level=info msg="shim disconnected" id=b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa namespace=k8s.io Nov 8 00:35:36.732958 containerd[2090]: time="2025-11-08T00:35:36.732800016Z" level=warning msg="cleaning up after shim disconnected" id=b5750c6c3495346e233d51e98427232db8051ca660ec27fdadf989bcbfca63aa namespace=k8s.io Nov 8 00:35:36.732958 containerd[2090]: time="2025-11-08T00:35:36.732809770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:35:36.737927 kubelet[2579]: E1108 00:35:36.737877 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:36.904007 containerd[2090]: time="2025-11-08T00:35:36.903636005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49v22,Uid:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:36.951860 containerd[2090]: time="2025-11-08T00:35:36.951199469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:35:36.986628 containerd[2090]: time="2025-11-08T00:35:36.984723145Z" level=error msg="Failed to destroy network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:36.986904 containerd[2090]: time="2025-11-08T00:35:36.986863308Z" level=error msg="encountered an error cleaning up failed sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:36.986985 containerd[2090]: time="2025-11-08T00:35:36.986938684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49v22,Uid:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:36.987677 kubelet[2579]: E1108 00:35:36.987234 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:36.987677 kubelet[2579]: E1108 00:35:36.987611 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:36.987677 kubelet[2579]: E1108 00:35:36.987644 2579 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-49v22" Nov 8 00:35:36.988166 kubelet[2579]: E1108 00:35:36.987926 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:36.988852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79-shm.mount: Deactivated successfully. 
Nov 8 00:35:37.738087 kubelet[2579]: E1108 00:35:37.738047 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:37.950840 kubelet[2579]: I1108 00:35:37.950806 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:37.951918 containerd[2090]: time="2025-11-08T00:35:37.951875946Z" level=info msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" Nov 8 00:35:37.952485 containerd[2090]: time="2025-11-08T00:35:37.952088388Z" level=info msg="Ensure that sandbox da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79 in task-service has been cleanup successfully" Nov 8 00:35:37.984681 containerd[2090]: time="2025-11-08T00:35:37.984627141Z" level=error msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" failed" error="failed to destroy network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:37.985064 kubelet[2579]: E1108 00:35:37.985003 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:37.985221 kubelet[2579]: E1108 00:35:37.985094 2579 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79"} Nov 8 00:35:37.985221 kubelet[2579]: E1108 00:35:37.985165 2579 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:37.985353 kubelet[2579]: E1108 00:35:37.985212 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34d56e07-ff7d-441e-b5c1-bf41dd56f15b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:38.739000 kubelet[2579]: E1108 00:35:38.738771 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:39.550690 kubelet[2579]: I1108 00:35:39.550607 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfm6q\" (UniqueName: \"kubernetes.io/projected/a27fa730-09db-44b5-8f4c-0360ea131785-kube-api-access-xfm6q\") pod \"nginx-deployment-7fcdb87857-zrh5l\" (UID: \"a27fa730-09db-44b5-8f4c-0360ea131785\") " pod="default/nginx-deployment-7fcdb87857-zrh5l" Nov 8 00:35:39.740119 kubelet[2579]: E1108 00:35:39.740077 2579 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:39.778356 containerd[2090]: time="2025-11-08T00:35:39.778307114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zrh5l,Uid:a27fa730-09db-44b5-8f4c-0360ea131785,Namespace:default,Attempt:0,}" Nov 8 00:35:39.903108 containerd[2090]: time="2025-11-08T00:35:39.902880757Z" level=error msg="Failed to destroy network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:39.906466 containerd[2090]: time="2025-11-08T00:35:39.906319499Z" level=error msg="encountered an error cleaning up failed sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:39.906466 containerd[2090]: time="2025-11-08T00:35:39.906399480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zrh5l,Uid:a27fa730-09db-44b5-8f4c-0360ea131785,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:39.906926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa-shm.mount: Deactivated successfully. 
Nov 8 00:35:39.909599 kubelet[2579]: E1108 00:35:39.908868 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:39.909599 kubelet[2579]: E1108 00:35:39.908937 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-zrh5l" Nov 8 00:35:39.909599 kubelet[2579]: E1108 00:35:39.908964 2579 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-zrh5l" Nov 8 00:35:39.909809 kubelet[2579]: E1108 00:35:39.909015 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-zrh5l_default(a27fa730-09db-44b5-8f4c-0360ea131785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-zrh5l_default(a27fa730-09db-44b5-8f4c-0360ea131785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-zrh5l" podUID="a27fa730-09db-44b5-8f4c-0360ea131785" Nov 8 00:35:39.956826 kubelet[2579]: I1108 00:35:39.956153 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:39.957107 containerd[2090]: time="2025-11-08T00:35:39.957075591Z" level=info msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" Nov 8 00:35:39.957497 containerd[2090]: time="2025-11-08T00:35:39.957421638Z" level=info msg="Ensure that sandbox 7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa in task-service has been cleanup successfully" Nov 8 00:35:40.006012 containerd[2090]: time="2025-11-08T00:35:40.005936279Z" level=error msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" failed" error="failed to destroy network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:40.006259 kubelet[2579]: E1108 00:35:40.006213 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:40.006367 kubelet[2579]: E1108 00:35:40.006270 2579 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa"} Nov 8 00:35:40.006367 kubelet[2579]: E1108 00:35:40.006314 2579 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a27fa730-09db-44b5-8f4c-0360ea131785\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:40.006511 kubelet[2579]: E1108 00:35:40.006361 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a27fa730-09db-44b5-8f4c-0360ea131785\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-zrh5l" podUID="a27fa730-09db-44b5-8f4c-0360ea131785" Nov 8 00:35:40.741174 kubelet[2579]: E1108 00:35:40.741133 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:41.742600 kubelet[2579]: E1108 00:35:41.742549 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:42.743529 kubelet[2579]: E1108 00:35:42.743452 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:42.826134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050274560.mount: Deactivated successfully. 
Nov 8 00:35:42.867638 containerd[2090]: time="2025-11-08T00:35:42.867561296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:42.869529 containerd[2090]: time="2025-11-08T00:35:42.869341309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:35:42.872909 containerd[2090]: time="2025-11-08T00:35:42.871909663Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:42.875781 containerd[2090]: time="2025-11-08T00:35:42.875112197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:42.875781 containerd[2090]: time="2025-11-08T00:35:42.875664295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.924420153s" Nov 8 00:35:42.875781 containerd[2090]: time="2025-11-08T00:35:42.875696035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:35:42.903797 containerd[2090]: time="2025-11-08T00:35:42.903750731Z" level=info msg="CreateContainer within sandbox \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:35:42.953468 containerd[2090]: time="2025-11-08T00:35:42.953396794Z" level=info msg="CreateContainer 
within sandbox \"fbb3dcc1ca1a98f20a23d6d56903dd34e1bf3de4d667ee4b5b257a5e099535cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe6f846d15c2ca87b661a1057a2cd0f4b945086d8c025e504167ef4391f05046\"" Nov 8 00:35:42.955774 containerd[2090]: time="2025-11-08T00:35:42.954086209Z" level=info msg="StartContainer for \"fe6f846d15c2ca87b661a1057a2cd0f4b945086d8c025e504167ef4391f05046\"" Nov 8 00:35:43.064390 containerd[2090]: time="2025-11-08T00:35:43.064286328Z" level=info msg="StartContainer for \"fe6f846d15c2ca87b661a1057a2cd0f4b945086d8c025e504167ef4391f05046\" returns successfully" Nov 8 00:35:43.170620 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:35:43.170736 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:35:43.744526 kubelet[2579]: E1108 00:35:43.744470 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:43.998448 kubelet[2579]: I1108 00:35:43.997850 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7xq75" podStartSLOduration=5.038735484 podStartE2EDuration="18.997828739s" podCreationTimestamp="2025-11-08 00:35:25 +0000 UTC" firstStartedPulling="2025-11-08 00:35:28.917457903 +0000 UTC m=+3.717818576" lastFinishedPulling="2025-11-08 00:35:42.876551172 +0000 UTC m=+17.676911831" observedRunningTime="2025-11-08 00:35:43.996797897 +0000 UTC m=+18.797158582" watchObservedRunningTime="2025-11-08 00:35:43.997828739 +0000 UTC m=+18.798189410" Nov 8 00:35:44.745424 kubelet[2579]: E1108 00:35:44.745362 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:44.841775 kernel: bpftool[3316]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:35:45.016015 systemd[1]: 
run-containerd-runc-k8s.io-fe6f846d15c2ca87b661a1057a2cd0f4b945086d8c025e504167ef4391f05046-runc.m6HuBK.mount: Deactivated successfully. Nov 8 00:35:45.170771 systemd-networkd[1655]: vxlan.calico: Link UP Nov 8 00:35:45.170781 systemd-networkd[1655]: vxlan.calico: Gained carrier Nov 8 00:35:45.174935 (udev-worker)[3371]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:35:45.203869 (udev-worker)[3129]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:35:45.717275 kubelet[2579]: E1108 00:35:45.717205 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:45.745830 kubelet[2579]: E1108 00:35:45.745771 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:46.004355 systemd[1]: run-containerd-runc-k8s.io-fe6f846d15c2ca87b661a1057a2cd0f4b945086d8c025e504167ef4391f05046-runc.aevkVc.mount: Deactivated successfully. Nov 8 00:35:46.746560 kubelet[2579]: E1108 00:35:46.746505 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:47.214354 systemd-networkd[1655]: vxlan.calico: Gained IPv6LL Nov 8 00:35:47.747220 kubelet[2579]: E1108 00:35:47.747150 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:48.127022 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 8 00:35:48.747641 kubelet[2579]: E1108 00:35:48.747562 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:49.748315 kubelet[2579]: E1108 00:35:49.748250 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:50.082400 ntpd[2051]: Listen normally on 6 vxlan.calico 192.168.19.192:123 Nov 8 00:35:50.082930 ntpd[2051]: 8 Nov 00:35:50 ntpd[2051]: Listen normally on 6 vxlan.calico 192.168.19.192:123 Nov 8 00:35:50.082930 ntpd[2051]: 8 Nov 00:35:50 ntpd[2051]: Listen normally on 7 vxlan.calico [fe80::64cb:4bff:fedc:6acf%3]:123 Nov 8 00:35:50.082476 ntpd[2051]: Listen normally on 7 vxlan.calico [fe80::64cb:4bff:fedc:6acf%3]:123 Nov 8 00:35:50.748603 kubelet[2579]: E1108 00:35:50.748441 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:50.902598 containerd[2090]: time="2025-11-08T00:35:50.902244142Z" level=info msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" Nov 8 00:35:50.902598 containerd[2090]: time="2025-11-08T00:35:50.902314859Z" level=info msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.016 [INFO][3481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.016 [INFO][3481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" iface="eth0" netns="/var/run/netns/cni-58588822-5b31-cb49-a2dc-4f5590d16f31" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.017 [INFO][3481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" iface="eth0" netns="/var/run/netns/cni-58588822-5b31-cb49-a2dc-4f5590d16f31" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.017 [INFO][3481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" iface="eth0" netns="/var/run/netns/cni-58588822-5b31-cb49-a2dc-4f5590d16f31" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.017 [INFO][3481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.017 [INFO][3481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.148 [INFO][3495] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.148 [INFO][3495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.148 [INFO][3495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.158 [WARNING][3495] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.158 [INFO][3495] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.160 [INFO][3495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:51.166762 containerd[2090]: 2025-11-08 00:35:51.162 [INFO][3481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:35:51.166762 containerd[2090]: time="2025-11-08T00:35:51.164847578Z" level=info msg="TearDown network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" successfully" Nov 8 00:35:51.166762 containerd[2090]: time="2025-11-08T00:35:51.164874518Z" level=info msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" returns successfully" Nov 8 00:35:51.168195 containerd[2090]: time="2025-11-08T00:35:51.167787612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49v22,Uid:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,Namespace:calico-system,Attempt:1,}" Nov 8 00:35:51.168982 systemd[1]: run-netns-cni\x2d58588822\x2d5b31\x2dcb49\x2da2dc\x2d4f5590d16f31.mount: Deactivated successfully. 
Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.012 [INFO][3480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.013 [INFO][3480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" iface="eth0" netns="/var/run/netns/cni-c74a6133-d752-a76b-544e-94f790e14dec" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.014 [INFO][3480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" iface="eth0" netns="/var/run/netns/cni-c74a6133-d752-a76b-544e-94f790e14dec" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.016 [INFO][3480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" iface="eth0" netns="/var/run/netns/cni-c74a6133-d752-a76b-544e-94f790e14dec" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.016 [INFO][3480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.016 [INFO][3480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.148 [INFO][3493] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.148 [INFO][3493] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.160 [INFO][3493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.166 [WARNING][3493] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.167 [INFO][3493] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.171 [INFO][3493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:51.176996 containerd[2090]: 2025-11-08 00:35:51.175 [INFO][3480] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:35:51.179791 containerd[2090]: time="2025-11-08T00:35:51.178219706Z" level=info msg="TearDown network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" successfully" Nov 8 00:35:51.179791 containerd[2090]: time="2025-11-08T00:35:51.178256394Z" level=info msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" returns successfully" Nov 8 00:35:51.180031 containerd[2090]: time="2025-11-08T00:35:51.179903211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zrh5l,Uid:a27fa730-09db-44b5-8f4c-0360ea131785,Namespace:default,Attempt:1,}" Nov 8 00:35:51.181853 systemd[1]: run-netns-cni\x2dc74a6133\x2dd752\x2da76b\x2d544e\x2d94f790e14dec.mount: Deactivated successfully. Nov 8 00:35:51.366884 systemd-networkd[1655]: cali126ce79c85f: Link UP Nov 8 00:35:51.368192 systemd-networkd[1655]: cali126ce79c85f: Gained carrier Nov 8 00:35:51.372347 (udev-worker)[3546]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.263 [INFO][3506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-csi--node--driver--49v22-eth0 csi-node-driver- calico-system 34d56e07-ff7d-441e-b5c1-bf41dd56f15b 1280 0 2025-11-08 00:35:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.30.13 csi-node-driver-49v22 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali126ce79c85f [] [] }} ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.265 [INFO][3506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.311 [INFO][3531] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" HandleID="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.311 [INFO][3531] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" HandleID="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" 
Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.30.13", "pod":"csi-node-driver-49v22", "timestamp":"2025-11-08 00:35:51.311013587 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.311 [INFO][3531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.311 [INFO][3531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.311 [INFO][3531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13' Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.321 [INFO][3531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.331 [INFO][3531] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.337 [INFO][3531] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.340 [INFO][3531] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.342 [INFO][3531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.343 [INFO][3531] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.19.192/26 handle="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.344 [INFO][3531] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.351 [INFO][3531] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3531] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.193/26] block=192.168.19.192/26 handle="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.193/26] handle="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" host="172.31.30.13" Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:51.391640 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3531] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.193/26] IPv6=[] ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" HandleID="k8s-pod-network.e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.360 [INFO][3506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--49v22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34d56e07-ff7d-441e-b5c1-bf41dd56f15b", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"csi-node-driver-49v22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali126ce79c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.360 [INFO][3506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.193/32] ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.360 [INFO][3506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali126ce79c85f ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.368 [INFO][3506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.369 [INFO][3506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--49v22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34d56e07-ff7d-441e-b5c1-bf41dd56f15b", ResourceVersion:"1280", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec", Pod:"csi-node-driver-49v22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali126ce79c85f", MAC:"4e:ca:49:87:ff:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:51.392731 containerd[2090]: 2025-11-08 00:35:51.385 [INFO][3506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec" Namespace="calico-system" Pod="csi-node-driver-49v22" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:35:51.421311 containerd[2090]: time="2025-11-08T00:35:51.421141529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:51.424448 containerd[2090]: time="2025-11-08T00:35:51.421349650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:51.424448 containerd[2090]: time="2025-11-08T00:35:51.421370422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:51.424448 containerd[2090]: time="2025-11-08T00:35:51.421473649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:51.480365 systemd-networkd[1655]: califfb8327876b: Link UP Nov 8 00:35:51.485675 systemd-networkd[1655]: califfb8327876b: Gained carrier Nov 8 00:35:51.501297 containerd[2090]: time="2025-11-08T00:35:51.501128535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49v22,Uid:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec\"" Nov 8 00:35:51.505814 containerd[2090]: time="2025-11-08T00:35:51.505661702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.275 [INFO][3516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0 nginx-deployment-7fcdb87857- default a27fa730-09db-44b5-8f4c-0360ea131785 1279 0 2025-11-08 00:35:39 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.13 nginx-deployment-7fcdb87857-zrh5l eth0 default [] [] [kns.default ksa.default.default] califfb8327876b [] [] }} ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-" Nov 8 00:35:51.507744 containerd[2090]: 
2025-11-08 00:35:51.275 [INFO][3516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.316 [INFO][3537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" HandleID="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.316 [INFO][3537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" HandleID="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f70), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"nginx-deployment-7fcdb87857-zrh5l", "timestamp":"2025-11-08 00:35:51.316259087 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.316 [INFO][3537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.357 [INFO][3537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13' Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.424 [INFO][3537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.434 [INFO][3537] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.442 [INFO][3537] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.444 [INFO][3537] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.449 [INFO][3537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.450 [INFO][3537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.454 [INFO][3537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.459 [INFO][3537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.468 [INFO][3537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.194/26] block=192.168.19.192/26 
handle="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.468 [INFO][3537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.194/26] handle="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" host="172.31.30.13" Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.469 [INFO][3537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:51.507744 containerd[2090]: 2025-11-08 00:35:51.469 [INFO][3537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.194/26] IPv6=[] ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" HandleID="k8s-pod-network.a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.473 [INFO][3516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a27fa730-09db-44b5-8f4c-0360ea131785", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-zrh5l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califfb8327876b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.473 [INFO][3516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.194/32] ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.473 [INFO][3516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfb8327876b ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.492 [INFO][3516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.493 [INFO][3516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a27fa730-09db-44b5-8f4c-0360ea131785", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af", Pod:"nginx-deployment-7fcdb87857-zrh5l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califfb8327876b", MAC:"ce:c1:bc:cf:75:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:51.515433 containerd[2090]: 2025-11-08 00:35:51.502 [INFO][3516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af" Namespace="default" Pod="nginx-deployment-7fcdb87857-zrh5l" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:35:51.541649 containerd[2090]: time="2025-11-08T00:35:51.541346975Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:51.541649 containerd[2090]: time="2025-11-08T00:35:51.541492293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:51.541649 containerd[2090]: time="2025-11-08T00:35:51.541512735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:51.542687 containerd[2090]: time="2025-11-08T00:35:51.542627130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:51.602913 containerd[2090]: time="2025-11-08T00:35:51.602769827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zrh5l,Uid:a27fa730-09db-44b5-8f4c-0360ea131785,Namespace:default,Attempt:1,} returns sandbox id \"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af\"" Nov 8 00:35:51.749216 kubelet[2579]: E1108 00:35:51.749042 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:51.799111 containerd[2090]: time="2025-11-08T00:35:51.799064013Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:51.801086 containerd[2090]: time="2025-11-08T00:35:51.801011746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:35:51.801285 containerd[2090]: time="2025-11-08T00:35:51.801070463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:35:51.801349 
kubelet[2579]: E1108 00:35:51.801297 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:51.801391 kubelet[2579]: E1108 00:35:51.801356 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:51.801767 kubelet[2579]: E1108 00:35:51.801630 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:51.802181 containerd[2090]: time="2025-11-08T00:35:51.802025755Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 8 00:35:52.653906 systemd-networkd[1655]: califfb8327876b: Gained IPv6LL Nov 8 00:35:52.750072 kubelet[2579]: E1108 00:35:52.749793 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:53.358189 systemd-networkd[1655]: cali126ce79c85f: Gained IPv6LL Nov 8 00:35:53.750309 kubelet[2579]: E1108 00:35:53.750185 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:54.750933 kubelet[2579]: E1108 00:35:54.750894 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:54.932681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626592097.mount: Deactivated successfully. 
Nov 8 00:35:55.752033 kubelet[2579]: E1108 00:35:55.751939 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:56.082532 ntpd[2051]: Listen normally on 8 cali126ce79c85f [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:35:56.084654 ntpd[2051]: 8 Nov 00:35:56 ntpd[2051]: Listen normally on 8 cali126ce79c85f [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:35:56.084654 ntpd[2051]: 8 Nov 00:35:56 ntpd[2051]: Listen normally on 9 califfb8327876b [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:35:56.084597 ntpd[2051]: Listen normally on 9 califfb8327876b [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:35:56.379288 containerd[2090]: time="2025-11-08T00:35:56.378925367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:56.383208 containerd[2090]: time="2025-11-08T00:35:56.382281645Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73311946" Nov 8 00:35:56.384634 containerd[2090]: time="2025-11-08T00:35:56.384565142Z" level=info msg="ImageCreate event name:\"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:56.391632 containerd[2090]: time="2025-11-08T00:35:56.391455682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:56.392967 containerd[2090]: time="2025-11-08T00:35:56.392124629Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\", size \"73311824\" in 4.59007209s" Nov 8 
00:35:56.392967 containerd[2090]: time="2025-11-08T00:35:56.392157638Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 8 00:35:56.393946 containerd[2090]: time="2025-11-08T00:35:56.393908158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:35:56.406351 containerd[2090]: time="2025-11-08T00:35:56.406295032Z" level=info msg="CreateContainer within sandbox \"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Nov 8 00:35:56.435667 containerd[2090]: time="2025-11-08T00:35:56.435353922Z" level=info msg="CreateContainer within sandbox \"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"49f1727a41fa06e9c63cec5c14992b36bf91eb1ebbbf7e00a0bbefa50babaf60\"" Nov 8 00:35:56.436636 containerd[2090]: time="2025-11-08T00:35:56.436292659Z" level=info msg="StartContainer for \"49f1727a41fa06e9c63cec5c14992b36bf91eb1ebbbf7e00a0bbefa50babaf60\"" Nov 8 00:35:56.508877 containerd[2090]: time="2025-11-08T00:35:56.507571661Z" level=info msg="StartContainer for \"49f1727a41fa06e9c63cec5c14992b36bf91eb1ebbbf7e00a0bbefa50babaf60\" returns successfully" Nov 8 00:35:56.659327 containerd[2090]: time="2025-11-08T00:35:56.658957159Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:56.664887 containerd[2090]: time="2025-11-08T00:35:56.664768744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:35:56.664887 
containerd[2090]: time="2025-11-08T00:35:56.664824641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:35:56.665099 kubelet[2579]: E1108 00:35:56.664998 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:56.665099 kubelet[2579]: E1108 00:35:56.665039 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:56.665217 kubelet[2579]: E1108 00:35:56.665171 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:56.666571 kubelet[2579]: E1108 00:35:56.666516 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:56.753217 kubelet[2579]: E1108 00:35:56.753144 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:57.005563 kubelet[2579]: E1108 00:35:57.005379 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:35:57.028485 kubelet[2579]: I1108 00:35:57.028429 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-zrh5l" podStartSLOduration=13.23878361 podStartE2EDuration="18.028412451s" podCreationTimestamp="2025-11-08 00:35:39 +0000 UTC" firstStartedPulling="2025-11-08 00:35:51.604068323 +0000 UTC m=+26.404428996" lastFinishedPulling="2025-11-08 00:35:56.393697177 +0000 UTC m=+31.194057837" observedRunningTime="2025-11-08 00:35:57.028119746 +0000 UTC m=+31.828480422" watchObservedRunningTime="2025-11-08 00:35:57.028412451 +0000 UTC m=+31.828773132" Nov 8 00:35:57.754204 kubelet[2579]: E1108 00:35:57.754124 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:58.754844 kubelet[2579]: E1108 00:35:58.754784 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:35:59.755538 kubelet[2579]: E1108 00:35:59.755463 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:00.756466 kubelet[2579]: E1108 00:36:00.756409 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:01.756852 kubelet[2579]: E1108 00:36:01.756790 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:02.251877 update_engine[2069]: I20251108 00:36:02.251780 2069 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:36:02.534749 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3758) Nov 8 00:36:02.758362 kubelet[2579]: E1108 00:36:02.756953 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:02.839719 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3762) Nov 8 00:36:03.119751 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3762) Nov 8 00:36:03.758007 kubelet[2579]: E1108 00:36:03.757950 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:04.759262 kubelet[2579]: E1108 00:36:04.759013 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:04.872475 kubelet[2579]: I1108 00:36:04.872350 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xf6s\" (UniqueName: \"kubernetes.io/projected/98aac7b0-298f-4231-8588-b0ba58cb1e0f-kube-api-access-4xf6s\") pod \"nfs-server-provisioner-0\" (UID: \"98aac7b0-298f-4231-8588-b0ba58cb1e0f\") " pod="default/nfs-server-provisioner-0" Nov 8 00:36:04.872475 kubelet[2579]: I1108 00:36:04.872443 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/98aac7b0-298f-4231-8588-b0ba58cb1e0f-data\") pod \"nfs-server-provisioner-0\" (UID: \"98aac7b0-298f-4231-8588-b0ba58cb1e0f\") " pod="default/nfs-server-provisioner-0" Nov 8 00:36:05.031865 containerd[2090]: time="2025-11-08T00:36:05.031639949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:98aac7b0-298f-4231-8588-b0ba58cb1e0f,Namespace:default,Attempt:0,}" Nov 8 00:36:05.212432 (udev-worker)[3759]: Network 
interface NamePolicy= disabled on kernel command line. Nov 8 00:36:05.213782 systemd-networkd[1655]: cali60e51b789ff: Link UP Nov 8 00:36:05.215633 systemd-networkd[1655]: cali60e51b789ff: Gained carrier Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.110 [INFO][4016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 98aac7b0-298f-4231-8588-b0ba58cb1e0f 1376 0 2025-11-08 00:36:04 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.30.13 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.110 [INFO][4016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.145 [INFO][4028] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" HandleID="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.146 [INFO][4028] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" HandleID="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"nfs-server-provisioner-0", "timestamp":"2025-11-08 00:36:05.145865548 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.146 [INFO][4028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.146 [INFO][4028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.146 [INFO][4028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13' Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.156 [INFO][4028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.164 [INFO][4028] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.172 [INFO][4028] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.177 [INFO][4028] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.182 [INFO][4028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.183 [INFO][4028] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.187 [INFO][4028] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.193 [INFO][4028] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.203 [INFO][4028] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.195/26] block=192.168.19.192/26 
handle="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.203 [INFO][4028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.195/26] handle="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" host="172.31.30.13" Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.203 [INFO][4028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:05.241847 containerd[2090]: 2025-11-08 00:36:05.203 [INFO][4028] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.195/26] IPv6=[] ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" HandleID="k8s-pod-network.8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.243129 containerd[2090]: 2025-11-08 00:36:05.209 [INFO][4016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"98aac7b0-298f-4231-8588-b0ba58cb1e0f", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:05.243129 containerd[2090]: 2025-11-08 00:36:05.209 [INFO][4016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.195/32] ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.243129 containerd[2090]: 2025-11-08 00:36:05.209 [INFO][4016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.243129 containerd[2090]: 2025-11-08 00:36:05.216 [INFO][4016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.246761 containerd[2090]: 2025-11-08 00:36:05.217 [INFO][4016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"98aac7b0-298f-4231-8588-b0ba58cb1e0f", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3a:e6:88:32:f0:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:05.246761 containerd[2090]: 2025-11-08 00:36:05.234 [INFO][4016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" Nov 8 00:36:05.273716 containerd[2090]: time="2025-11-08T00:36:05.273438334Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:05.273716 containerd[2090]: time="2025-11-08T00:36:05.273507831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:05.273716 containerd[2090]: time="2025-11-08T00:36:05.273529274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:05.273716 containerd[2090]: time="2025-11-08T00:36:05.273676431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:05.343876 containerd[2090]: time="2025-11-08T00:36:05.343829847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:98aac7b0-298f-4231-8588-b0ba58cb1e0f,Namespace:default,Attempt:0,} returns sandbox id \"8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a\"" Nov 8 00:36:05.345812 containerd[2090]: time="2025-11-08T00:36:05.345778050Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Nov 8 00:36:05.716655 kubelet[2579]: E1108 00:36:05.716613 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:05.760164 kubelet[2579]: E1108 00:36:05.760095 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:06.286382 systemd-networkd[1655]: cali60e51b789ff: Gained IPv6LL Nov 8 00:36:06.760413 kubelet[2579]: E1108 00:36:06.760360 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:07.761122 kubelet[2579]: E1108 00:36:07.761076 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 
00:36:07.815659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166415626.mount: Deactivated successfully. Nov 8 00:36:08.761753 kubelet[2579]: E1108 00:36:08.761665 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:09.082420 ntpd[2051]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:36:09.083159 ntpd[2051]: 8 Nov 00:36:09 ntpd[2051]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:36:09.762678 kubelet[2579]: E1108 00:36:09.762645 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:10.102067 containerd[2090]: time="2025-11-08T00:36:10.101693490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:10.105874 containerd[2090]: time="2025-11-08T00:36:10.105813067Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Nov 8 00:36:10.106660 containerd[2090]: time="2025-11-08T00:36:10.106599191Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:10.112600 containerd[2090]: time="2025-11-08T00:36:10.110786313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:10.113625 containerd[2090]: time="2025-11-08T00:36:10.113574350Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.767758225s" Nov 8 00:36:10.113701 containerd[2090]: time="2025-11-08T00:36:10.113630333Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Nov 8 00:36:10.129481 containerd[2090]: time="2025-11-08T00:36:10.129435748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:36:10.181203 containerd[2090]: time="2025-11-08T00:36:10.181159580Z" level=info msg="CreateContainer within sandbox \"8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Nov 8 00:36:10.231930 containerd[2090]: time="2025-11-08T00:36:10.231866518Z" level=info msg="CreateContainer within sandbox \"8ab87b9cf52fcf7472d41cdcb2aacc506893957585639f5317be8ec8d981021a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"06c5059c0e8d46fca3c6b537057cb26a15b9cbdf213c79423565b4855cdf1575\"" Nov 8 00:36:10.238994 containerd[2090]: time="2025-11-08T00:36:10.238951447Z" level=info msg="StartContainer for \"06c5059c0e8d46fca3c6b537057cb26a15b9cbdf213c79423565b4855cdf1575\"" Nov 8 00:36:10.282564 systemd[1]: run-containerd-runc-k8s.io-06c5059c0e8d46fca3c6b537057cb26a15b9cbdf213c79423565b4855cdf1575-runc.P7oSg6.mount: Deactivated successfully. 
Nov 8 00:36:10.329407 containerd[2090]: time="2025-11-08T00:36:10.328893254Z" level=info msg="StartContainer for \"06c5059c0e8d46fca3c6b537057cb26a15b9cbdf213c79423565b4855cdf1575\" returns successfully" Nov 8 00:36:10.399864 containerd[2090]: time="2025-11-08T00:36:10.399707955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:10.402673 containerd[2090]: time="2025-11-08T00:36:10.402388798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:36:10.402673 containerd[2090]: time="2025-11-08T00:36:10.402495814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:36:10.403135 kubelet[2579]: E1108 00:36:10.402820 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:10.403135 kubelet[2579]: E1108 00:36:10.402875 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:10.409922 kubelet[2579]: E1108 00:36:10.409792 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:10.412406 containerd[2090]: time="2025-11-08T00:36:10.412185253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:36:10.684475 containerd[2090]: time="2025-11-08T00:36:10.684402410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:10.686704 containerd[2090]: time="2025-11-08T00:36:10.686617759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:36:10.686864 containerd[2090]: time="2025-11-08T00:36:10.686755904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:36:10.686995 kubelet[2579]: E1108 00:36:10.686943 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:10.687083 kubelet[2579]: E1108 00:36:10.687001 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:10.687278 kubelet[2579]: E1108 
00:36:10.687215 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:10.689308 kubelet[2579]: E1108 00:36:10.689250 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:36:10.765173 kubelet[2579]: E1108 00:36:10.765107 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:11.108187 kubelet[2579]: I1108 00:36:11.108036 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.326224519 podStartE2EDuration="7.108017863s" podCreationTimestamp="2025-11-08 00:36:04 +0000 UTC" firstStartedPulling="2025-11-08 00:36:05.345252293 +0000 UTC m=+40.145612963" lastFinishedPulling="2025-11-08 00:36:10.127045633 +0000 UTC m=+44.927406307" observedRunningTime="2025-11-08 00:36:11.107821897 +0000 UTC m=+45.908182581" 
watchObservedRunningTime="2025-11-08 00:36:11.108017863 +0000 UTC m=+45.908378544" Nov 8 00:36:11.765939 kubelet[2579]: E1108 00:36:11.765884 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:12.766303 kubelet[2579]: E1108 00:36:12.766222 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:13.766600 kubelet[2579]: E1108 00:36:13.766508 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:14.767333 kubelet[2579]: E1108 00:36:14.767276 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:15.767972 kubelet[2579]: E1108 00:36:15.767894 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:16.768772 kubelet[2579]: E1108 00:36:16.768724 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:17.768925 kubelet[2579]: E1108 00:36:17.768854 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:18.769865 kubelet[2579]: E1108 00:36:18.769790 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:19.770393 kubelet[2579]: E1108 00:36:19.770307 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:20.770983 kubelet[2579]: E1108 00:36:20.770903 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:21.771130 kubelet[2579]: E1108 00:36:21.771078 2579 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:22.771922 kubelet[2579]: E1108 00:36:22.771865 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:23.772291 kubelet[2579]: E1108 00:36:23.772241 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:24.773193 kubelet[2579]: E1108 00:36:24.773126 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:25.716699 kubelet[2579]: E1108 00:36:25.716653 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:25.755750 containerd[2090]: time="2025-11-08T00:36:25.755639446Z" level=info msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" Nov 8 00:36:25.774132 kubelet[2579]: E1108 00:36:25.774088 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.803 [WARNING][4233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--49v22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34d56e07-ff7d-441e-b5c1-bf41dd56f15b", ResourceVersion:"1399", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec", Pod:"csi-node-driver-49v22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali126ce79c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.803 [INFO][4233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.804 [INFO][4233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" iface="eth0" netns="" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.804 [INFO][4233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.804 [INFO][4233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.831 [INFO][4240] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.831 [INFO][4240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.831 [INFO][4240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.841 [WARNING][4240] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.841 [INFO][4240] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.844 [INFO][4240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:25.847230 containerd[2090]: 2025-11-08 00:36:25.845 [INFO][4233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.847230 containerd[2090]: time="2025-11-08T00:36:25.847243477Z" level=info msg="TearDown network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" successfully" Nov 8 00:36:25.847230 containerd[2090]: time="2025-11-08T00:36:25.847266184Z" level=info msg="StopPodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" returns successfully" Nov 8 00:36:25.870062 containerd[2090]: time="2025-11-08T00:36:25.870001753Z" level=info msg="RemovePodSandbox for \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" Nov 8 00:36:25.870062 containerd[2090]: time="2025-11-08T00:36:25.870053513Z" level=info msg="Forcibly stopping sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\"" Nov 8 00:36:25.905536 kubelet[2579]: E1108 00:36:25.905475 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.906 [WARNING][4254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--49v22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34d56e07-ff7d-441e-b5c1-bf41dd56f15b", ResourceVersion:"1399", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"e279d14bad2d0141592fbe0fc9973acb660f08649e4afed716e09dbb536355ec", Pod:"csi-node-driver-49v22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali126ce79c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.906 [INFO][4254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.906 [INFO][4254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" iface="eth0" netns="" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.906 [INFO][4254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.906 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.940 [INFO][4261] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.941 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.941 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.949 [WARNING][4261] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.949 [INFO][4261] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" HandleID="k8s-pod-network.da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Workload="172.31.30.13-k8s-csi--node--driver--49v22-eth0" Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.951 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:25.954027 containerd[2090]: 2025-11-08 00:36:25.952 [INFO][4254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79" Nov 8 00:36:25.954826 containerd[2090]: time="2025-11-08T00:36:25.954069528Z" level=info msg="TearDown network for sandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" successfully" Nov 8 00:36:25.966601 containerd[2090]: time="2025-11-08T00:36:25.966523668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:36:25.966829 containerd[2090]: time="2025-11-08T00:36:25.966625946Z" level=info msg="RemovePodSandbox \"da11b92eef056a53d40abfb19923a36f2bdfcbeaf0e4be1b804868661467bb79\" returns successfully" Nov 8 00:36:25.972711 containerd[2090]: time="2025-11-08T00:36:25.972668678Z" level=info msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.008 [WARNING][4277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a27fa730-09db-44b5-8f4c-0360ea131785", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af", Pod:"nginx-deployment-7fcdb87857-zrh5l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califfb8327876b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.008 [INFO][4277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.008 [INFO][4277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" iface="eth0" netns="" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.008 [INFO][4277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.008 [INFO][4277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.031 [INFO][4284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.032 [INFO][4284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.032 [INFO][4284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.040 [WARNING][4284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.041 [INFO][4284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.043 [INFO][4284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:26.047893 containerd[2090]: 2025-11-08 00:36:26.044 [INFO][4277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.047893 containerd[2090]: time="2025-11-08T00:36:26.047791191Z" level=info msg="TearDown network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" successfully" Nov 8 00:36:26.047893 containerd[2090]: time="2025-11-08T00:36:26.047840875Z" level=info msg="StopPodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" returns successfully" Nov 8 00:36:26.050026 containerd[2090]: time="2025-11-08T00:36:26.049534695Z" level=info msg="RemovePodSandbox for \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" Nov 8 00:36:26.050026 containerd[2090]: time="2025-11-08T00:36:26.049574808Z" level=info msg="Forcibly stopping sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\"" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.088 [WARNING][4298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a27fa730-09db-44b5-8f4c-0360ea131785", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"a4accb949242d90bf5fc52d103fc135e5a0695c4fa849e0672cdedf92faed6af", Pod:"nginx-deployment-7fcdb87857-zrh5l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califfb8327876b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.088 [INFO][4298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.088 [INFO][4298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" iface="eth0" netns="" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.088 [INFO][4298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.088 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.113 [INFO][4305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.113 [INFO][4305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.113 [INFO][4305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.126 [WARNING][4305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.126 [INFO][4305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" HandleID="k8s-pod-network.7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Workload="172.31.30.13-k8s-nginx--deployment--7fcdb87857--zrh5l-eth0" Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.129 [INFO][4305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:26.131669 containerd[2090]: 2025-11-08 00:36:26.130 [INFO][4298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa" Nov 8 00:36:26.144772 containerd[2090]: time="2025-11-08T00:36:26.144705815Z" level=info msg="TearDown network for sandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" successfully" Nov 8 00:36:26.149930 containerd[2090]: time="2025-11-08T00:36:26.149867502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:36:26.149930 containerd[2090]: time="2025-11-08T00:36:26.149922434Z" level=info msg="RemovePodSandbox \"7a433772309dfe0552c7bb7484c42eec8d72938de2ee7c680c49364fd951fbfa\" returns successfully" Nov 8 00:36:26.774564 kubelet[2579]: E1108 00:36:26.774516 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:27.775284 kubelet[2579]: E1108 00:36:27.775230 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:28.776353 kubelet[2579]: E1108 00:36:28.776294 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:29.777123 kubelet[2579]: E1108 00:36:29.777047 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:30.777626 kubelet[2579]: E1108 00:36:30.777386 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:31.778526 kubelet[2579]: E1108 00:36:31.778466 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:32.778967 kubelet[2579]: E1108 00:36:32.778906 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:33.779599 kubelet[2579]: E1108 00:36:33.779541 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:34.780494 kubelet[2579]: E1108 00:36:34.780441 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:35.589495 kubelet[2579]: I1108 00:36:35.589369 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-0c87bdc4-18b6-437f-b31e-ff4d86011bb3\" (UniqueName: \"kubernetes.io/nfs/9e72a885-1ca3-41b0-b5fd-f72bdc008c06-pvc-0c87bdc4-18b6-437f-b31e-ff4d86011bb3\") pod \"test-pod-1\" (UID: \"9e72a885-1ca3-41b0-b5fd-f72bdc008c06\") " pod="default/test-pod-1" Nov 8 00:36:35.589495 kubelet[2579]: I1108 00:36:35.589428 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjx5s\" (UniqueName: \"kubernetes.io/projected/9e72a885-1ca3-41b0-b5fd-f72bdc008c06-kube-api-access-kjx5s\") pod \"test-pod-1\" (UID: \"9e72a885-1ca3-41b0-b5fd-f72bdc008c06\") " pod="default/test-pod-1" Nov 8 00:36:35.740614 kernel: FS-Cache: Loaded Nov 8 00:36:35.780977 kubelet[2579]: E1108 00:36:35.780882 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:35.817762 kernel: RPC: Registered named UNIX socket transport module. Nov 8 00:36:35.817858 kernel: RPC: Registered udp transport module. Nov 8 00:36:35.818940 kernel: RPC: Registered tcp transport module. Nov 8 00:36:35.819034 kernel: RPC: Registered tcp-with-tls transport module. Nov 8 00:36:35.820043 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Nov 8 00:36:36.133905 kernel: NFS: Registering the id_resolver key type Nov 8 00:36:36.134041 kernel: Key type id_resolver registered Nov 8 00:36:36.134079 kernel: Key type id_legacy registered Nov 8 00:36:36.169025 nfsidmap[4329]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Nov 8 00:36:36.177748 nfsidmap[4330]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Nov 8 00:36:36.414548 containerd[2090]: time="2025-11-08T00:36:36.414418370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9e72a885-1ca3-41b0-b5fd-f72bdc008c06,Namespace:default,Attempt:0,}" Nov 8 00:36:36.554329 (udev-worker)[4326]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:36:36.555703 systemd-networkd[1655]: cali5ec59c6bf6e: Link UP Nov 8 00:36:36.555983 systemd-networkd[1655]: cali5ec59c6bf6e: Gained carrier Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.469 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-test--pod--1-eth0 default 9e72a885-1ca3-41b0-b5fd-f72bdc008c06 1559 0 2025-11-08 00:36:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.13 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.470 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" 
WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.497 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" HandleID="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Workload="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.497 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" HandleID="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Workload="172.31.30.13-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"test-pod-1", "timestamp":"2025-11-08 00:36:36.497617735 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.497 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.497 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.497 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13' Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.507 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.512 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.521 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.526 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.529 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.529 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.532 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8 Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.538 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.548 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.196/26] block=192.168.19.192/26 
handle="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.548 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.196/26] handle="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" host="172.31.30.13" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.548 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.549 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.196/26] IPv6=[] ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" HandleID="k8s-pod-network.90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Workload="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.572145 containerd[2090]: 2025-11-08 00:36:36.550 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9e72a885-1ca3-41b0-b5fd-f72bdc008c06", ResourceVersion:"1559", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.31.30.13", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:36.573886 containerd[2090]: 2025-11-08 00:36:36.551 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.196/32] ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.573886 containerd[2090]: 2025-11-08 00:36:36.551 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.573886 containerd[2090]: 2025-11-08 00:36:36.556 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.573886 containerd[2090]: 2025-11-08 00:36:36.557 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9e72a885-1ca3-41b0-b5fd-f72bdc008c06", ResourceVersion:"1559", 
Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1e:12:3a:6b:ba:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:36:36.573886 containerd[2090]: 2025-11-08 00:36:36.568 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" Nov 8 00:36:36.608098 containerd[2090]: time="2025-11-08T00:36:36.607991860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:36.608098 containerd[2090]: time="2025-11-08T00:36:36.608054492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:36.608370 containerd[2090]: time="2025-11-08T00:36:36.608070579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:36.608370 containerd[2090]: time="2025-11-08T00:36:36.608184892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:36.682301 containerd[2090]: time="2025-11-08T00:36:36.682185064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9e72a885-1ca3-41b0-b5fd-f72bdc008c06,Namespace:default,Attempt:0,} returns sandbox id \"90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8\"" Nov 8 00:36:36.684089 containerd[2090]: time="2025-11-08T00:36:36.683923201Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 8 00:36:36.781867 kubelet[2579]: E1108 00:36:36.781813 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:36.996904 containerd[2090]: time="2025-11-08T00:36:36.996789509Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:36.998665 containerd[2090]: time="2025-11-08T00:36:36.998604834Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Nov 8 00:36:37.002219 containerd[2090]: time="2025-11-08T00:36:37.002174585Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90\", size \"73311824\" in 318.218574ms" Nov 8 00:36:37.002219 containerd[2090]: time="2025-11-08T00:36:37.002218341Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 8 00:36:37.005890 containerd[2090]: 
time="2025-11-08T00:36:37.005848935Z" level=info msg="CreateContainer within sandbox \"90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Nov 8 00:36:37.030827 containerd[2090]: time="2025-11-08T00:36:37.030707641Z" level=info msg="CreateContainer within sandbox \"90f41dc7d839a22fcd264cdcf072397867eacf56f61803d4962413348c318ea8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2804a602ee92aaae506e0706dfb549f9aba751e44d0ef645c5355101032b3047\"" Nov 8 00:36:37.031700 containerd[2090]: time="2025-11-08T00:36:37.031616246Z" level=info msg="StartContainer for \"2804a602ee92aaae506e0706dfb549f9aba751e44d0ef645c5355101032b3047\"" Nov 8 00:36:37.115450 containerd[2090]: time="2025-11-08T00:36:37.115261841Z" level=info msg="StartContainer for \"2804a602ee92aaae506e0706dfb549f9aba751e44d0ef645c5355101032b3047\" returns successfully" Nov 8 00:36:37.783003 kubelet[2579]: E1108 00:36:37.782926 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:38.029928 systemd-networkd[1655]: cali5ec59c6bf6e: Gained IPv6LL Nov 8 00:36:38.783605 kubelet[2579]: E1108 00:36:38.783528 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:39.784736 kubelet[2579]: E1108 00:36:39.784665 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:40.082444 ntpd[2051]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:36:40.083033 ntpd[2051]: 8 Nov 00:36:40 ntpd[2051]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:36:40.785449 kubelet[2579]: E1108 00:36:40.785251 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:40.902873 
containerd[2090]: time="2025-11-08T00:36:40.902682026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:36:40.917674 kubelet[2579]: I1108 00:36:40.917259 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=35.597519136 podStartE2EDuration="35.917241512s" podCreationTimestamp="2025-11-08 00:36:05 +0000 UTC" firstStartedPulling="2025-11-08 00:36:36.683213301 +0000 UTC m=+71.483573962" lastFinishedPulling="2025-11-08 00:36:37.002935671 +0000 UTC m=+71.803296338" observedRunningTime="2025-11-08 00:36:37.175925937 +0000 UTC m=+71.976286620" watchObservedRunningTime="2025-11-08 00:36:40.917241512 +0000 UTC m=+75.717602175" Nov 8 00:36:41.221225 containerd[2090]: time="2025-11-08T00:36:41.221177518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:41.223707 containerd[2090]: time="2025-11-08T00:36:41.223629578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:36:41.223707 containerd[2090]: time="2025-11-08T00:36:41.223662896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:36:41.224024 kubelet[2579]: E1108 00:36:41.223949 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:41.224024 kubelet[2579]: E1108 00:36:41.224011 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:41.224227 kubelet[2579]: E1108 00:36:41.224167 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:41.226948 containerd[2090]: time="2025-11-08T00:36:41.226905007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:36:41.512697 containerd[2090]: time="2025-11-08T00:36:41.512552428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:41.514808 containerd[2090]: time="2025-11-08T00:36:41.514730828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:36:41.515004 containerd[2090]: time="2025-11-08T00:36:41.514821448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:36:41.515045 kubelet[2579]: E1108 00:36:41.514961 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:41.515045 kubelet[2579]: E1108 00:36:41.515003 2579 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:41.515173 kubelet[2579]: E1108 00:36:41.515131 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:41.516883 kubelet[2579]: E1108 00:36:41.516828 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:36:41.785892 kubelet[2579]: E1108 00:36:41.785752 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:42.786465 kubelet[2579]: E1108 00:36:42.786409 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Nov 8 00:36:43.787141 kubelet[2579]: E1108 00:36:43.787024 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:44.787718 kubelet[2579]: E1108 00:36:44.787652 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:45.716858 kubelet[2579]: E1108 00:36:45.716803 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:45.788830 kubelet[2579]: E1108 00:36:45.788777 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:46.789282 kubelet[2579]: E1108 00:36:46.789210 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:47.790380 kubelet[2579]: E1108 00:36:47.790320 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:48.790749 kubelet[2579]: E1108 00:36:48.790674 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:49.791649 kubelet[2579]: E1108 00:36:49.791561 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:50.792683 kubelet[2579]: E1108 00:36:50.792631 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:51.793206 kubelet[2579]: E1108 00:36:51.793145 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:51.906015 kubelet[2579]: E1108 00:36:51.905944 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:36:52.793351 kubelet[2579]: E1108 00:36:52.793300 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:53.793661 kubelet[2579]: E1108 00:36:53.793608 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:54.794036 kubelet[2579]: E1108 00:36:54.793977 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:55.795171 kubelet[2579]: E1108 00:36:55.795121 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:56.795637 kubelet[2579]: E1108 00:36:56.795535 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:57.796053 kubelet[2579]: E1108 00:36:57.795977 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:58.037130 
kubelet[2579]: E1108 00:36:58.037056 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 8 00:36:58.797004 kubelet[2579]: E1108 00:36:58.796920 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:36:59.797378 kubelet[2579]: E1108 00:36:59.797300 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:00.797550 kubelet[2579]: E1108 00:37:00.797495 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:01.798565 kubelet[2579]: E1108 00:37:01.798477 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:02.799016 kubelet[2579]: E1108 00:37:02.798956 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:02.903574 kubelet[2579]: E1108 00:37:02.903520 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:37:03.799866 kubelet[2579]: E1108 00:37:03.799777 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:04.800729 kubelet[2579]: E1108 00:37:04.800661 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:05.716389 kubelet[2579]: E1108 00:37:05.716344 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:05.801299 kubelet[2579]: E1108 00:37:05.801233 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:06.802334 kubelet[2579]: E1108 00:37:06.802273 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:07.803604 kubelet[2579]: E1108 00:37:07.803534 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:08.043979 kubelet[2579]: E1108 00:37:08.043920 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:37:08.804245 kubelet[2579]: E1108 00:37:08.804203 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:09.805417 kubelet[2579]: E1108 00:37:09.805342 2579 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:10.806288 kubelet[2579]: E1108 00:37:10.806235 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:11.807179 kubelet[2579]: E1108 00:37:11.807122 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:12.808081 kubelet[2579]: E1108 00:37:12.808038 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:13.808835 kubelet[2579]: E1108 00:37:13.808763 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:14.809244 kubelet[2579]: E1108 00:37:14.809191 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:15.809734 kubelet[2579]: E1108 00:37:15.809673 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:16.809963 kubelet[2579]: E1108 00:37:16.809912 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:17.810436 kubelet[2579]: E1108 00:37:17.810361 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:17.903160 kubelet[2579]: E1108 00:37:17.903103 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" Nov 8 00:37:18.051832 kubelet[2579]: E1108 00:37:18.051749 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:37:18.811321 kubelet[2579]: E1108 00:37:18.811173 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:19.811732 kubelet[2579]: E1108 00:37:19.811669 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:20.812209 kubelet[2579]: E1108 00:37:20.812150 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:21.812695 kubelet[2579]: E1108 00:37:21.812636 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:22.813707 kubelet[2579]: E1108 00:37:22.813636 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:23.814808 kubelet[2579]: E1108 00:37:23.814755 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:24.782803 kubelet[2579]: 
E1108 00:37:24.781734 2579 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/events/csi-node-driver-49v22.1875e0f17ed11a3e\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-49v22.1875e0f17ed11a3e calico-system 1505 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-49v22,UID:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,APIVersion:v1,ResourceVersion:936,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:57 +0000 UTC,LastTimestamp:2025-11-08 00:36:51.905282066 +0000 UTC m=+86.705642756,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}" Nov 8 00:37:24.782803 kubelet[2579]: E1108 00:37:24.781857 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": unexpected EOF" Nov 8 00:37:24.799606 kubelet[2579]: E1108 00:37:24.798748 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": read tcp 172.31.30.13:59618->172.31.19.248:6443: read: connection reset by peer" Nov 8 00:37:24.802150 kubelet[2579]: I1108 00:37:24.800026 2579 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 8 00:37:24.803933 kubelet[2579]: E1108 00:37:24.803903 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": 
dial tcp 172.31.19.248:6443: connect: connection refused" interval="200ms" Nov 8 00:37:24.815020 kubelet[2579]: E1108 00:37:24.814979 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:25.005395 kubelet[2579]: E1108 00:37:25.005351 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="400ms" Nov 8 00:37:25.407101 kubelet[2579]: E1108 00:37:25.406989 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="800ms" Nov 8 00:37:25.491416 kubelet[2579]: E1108 00:37:25.491170 2579 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/events/csi-node-driver-49v22.1875e0f17ed11a3e\": dial tcp 172.31.19.248:6443: connect: connection refused" event="&Event{ObjectMeta:{csi-node-driver-49v22.1875e0f17ed11a3e calico-system 1505 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-49v22,UID:34d56e07-ff7d-441e-b5c1-bf41dd56f15b,APIVersion:v1,ResourceVersion:936,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-11-08 00:35:57 +0000 UTC,LastTimestamp:2025-11-08 00:36:51.905282066 +0000 UTC m=+86.705642756,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}" Nov 8 00:37:25.716829 
kubelet[2579]: E1108 00:37:25.716750 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:25.805912 kubelet[2579]: I1108 00:37:25.805330 2579 status_manager.go:890] "Failed to get status for pod" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" pod="calico-system/csi-node-driver-49v22" err="Get \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49v22\": dial tcp 172.31.19.248:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Nov 8 00:37:25.806870 kubelet[2579]: I1108 00:37:25.806830 2579 status_manager.go:890] "Failed to get status for pod" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" pod="calico-system/csi-node-driver-49v22" err="Get \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49v22\": dial tcp 172.31.19.248:6443: connect: connection refused" Nov 8 00:37:25.807857 kubelet[2579]: I1108 00:37:25.807814 2579 status_manager.go:890] "Failed to get status for pod" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" pod="calico-system/csi-node-driver-49v22" err="Get \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49v22\": dial tcp 172.31.19.248:6443: connect: connection refused" Nov 8 00:37:25.815183 kubelet[2579]: E1108 00:37:25.815142 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:25.909660 kubelet[2579]: I1108 00:37:25.908777 2579 status_manager.go:890] "Failed to get status for pod" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b" pod="calico-system/csi-node-driver-49v22" err="Get \"https://172.31.19.248:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49v22\": dial tcp 172.31.19.248:6443: connect: connection refused" Nov 8 00:37:26.208782 kubelet[2579]: E1108 00:37:26.208735 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="1.6s" Nov 8 00:37:26.815546 kubelet[2579]: E1108 00:37:26.815496 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:27.810427 kubelet[2579]: E1108 00:37:27.810384 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="3.2s" Nov 8 00:37:27.817017 kubelet[2579]: E1108 00:37:27.816920 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:28.817862 kubelet[2579]: E1108 00:37:28.817771 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:29.819005 kubelet[2579]: E1108 00:37:29.818949 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:30.819094 kubelet[2579]: E1108 00:37:30.819054 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 8 00:37:30.903433 containerd[2090]: time="2025-11-08T00:37:30.903373953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:37:31.181841 containerd[2090]: time="2025-11-08T00:37:31.181782323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:31.184241 containerd[2090]: time="2025-11-08T00:37:31.183991442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:37:31.184241 containerd[2090]: time="2025-11-08T00:37:31.184024937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:37:31.184459 kubelet[2579]: E1108 00:37:31.184413 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:31.184522 kubelet[2579]: E1108 00:37:31.184468 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:31.184706 kubelet[2579]: E1108 00:37:31.184649 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:37:31.186762 containerd[2090]: time="2025-11-08T00:37:31.186723023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:37:31.446333 containerd[2090]: time="2025-11-08T00:37:31.446107634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:37:31.448573 containerd[2090]: time="2025-11-08T00:37:31.448504688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:37:31.448715 containerd[2090]: time="2025-11-08T00:37:31.448626376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:37:31.448993 kubelet[2579]: E1108 00:37:31.448766 2579 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:37:31.448993 kubelet[2579]: E1108 00:37:31.448819 2579 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:37:31.448993 kubelet[2579]: E1108 00:37:31.448932 2579 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49v22_calico-system(34d56e07-ff7d-441e-b5c1-bf41dd56f15b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:37:31.450162 kubelet[2579]: E1108 00:37:31.450113 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b"
Nov 8 00:37:31.819668 kubelet[2579]: E1108 00:37:31.819513 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:32.820369 kubelet[2579]: E1108 00:37:32.820302 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:33.821052 kubelet[2579]: E1108 00:37:33.820989 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:34.822314 kubelet[2579]: E1108 00:37:34.822262 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:35.823605 kubelet[2579]: E1108 00:37:35.823547 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:36.823969 kubelet[2579]: E1108 00:37:36.823905 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:37.824637 kubelet[2579]: E1108 00:37:37.824556 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:38.825737 kubelet[2579]: E1108 00:37:38.825665 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:39.826438 kubelet[2579]: E1108 00:37:39.826378 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:40.827680 kubelet[2579]: E1108 00:37:40.827554 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:41.012108 kubelet[2579]: E1108 00:37:41.012052 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.13?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Nov 8 00:37:41.827765 kubelet[2579]: E1108 00:37:41.827700 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:42.828253 kubelet[2579]: E1108 00:37:42.828166 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:42.903361 kubelet[2579]: E1108 00:37:42.903291 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49v22" podUID="34d56e07-ff7d-441e-b5c1-bf41dd56f15b"
Nov 8 00:37:43.829156 kubelet[2579]: E1108 00:37:43.829024 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:44.830061 kubelet[2579]: E1108 00:37:44.830003 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:45.717241 kubelet[2579]: E1108 00:37:45.717183 2579 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:45.830283 kubelet[2579]: E1108 00:37:45.830225 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:46.830821 kubelet[2579]: E1108 00:37:46.830761 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Nov 8 00:37:47.831745 kubelet[2579]: E1108 00:37:47.831697 2579 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"