Jul 14 22:21:47.873139 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025 Jul 14 22:21:47.873163 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:21:47.873175 kernel: BIOS-provided physical RAM map: Jul 14 22:21:47.873182 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 14 22:21:47.873188 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 14 22:21:47.873194 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 14 22:21:47.873201 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 14 22:21:47.873217 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 14 22:21:47.873225 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 14 22:21:47.873242 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 14 22:21:47.873253 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 14 22:21:47.873261 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 14 22:21:47.873269 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 14 22:21:47.873278 kernel: NX (Execute Disable) protection: active Jul 14 22:21:47.873288 kernel: APIC: Static calls initialized Jul 14 22:21:47.873298 kernel: SMBIOS 2.8 present. 
Jul 14 22:21:47.873305 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 14 22:21:47.873312 kernel: Hypervisor detected: KVM Jul 14 22:21:47.873318 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 14 22:21:47.873325 kernel: kvm-clock: using sched offset of 2198103544 cycles Jul 14 22:21:47.873332 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 14 22:21:47.873339 kernel: tsc: Detected 2794.748 MHz processor Jul 14 22:21:47.873346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 22:21:47.873353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 22:21:47.873360 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 14 22:21:47.873370 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 14 22:21:47.873376 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 14 22:21:47.873383 kernel: Using GB pages for direct mapping Jul 14 22:21:47.873390 kernel: ACPI: Early table checksum verification disabled Jul 14 22:21:47.873397 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 14 22:21:47.873404 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873411 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873417 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873426 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 14 22:21:47.873433 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873440 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873447 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873454 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:21:47.873461 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 14 22:21:47.873468 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 14 22:21:47.873478 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 14 22:21:47.873487 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 14 22:21:47.873494 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 14 22:21:47.873501 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 14 22:21:47.873508 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 14 22:21:47.873515 kernel: No NUMA configuration found Jul 14 22:21:47.873522 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 14 22:21:47.873529 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 14 22:21:47.873539 kernel: Zone ranges: Jul 14 22:21:47.873546 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 14 22:21:47.873553 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 14 22:21:47.873560 kernel: Normal empty Jul 14 22:21:47.873567 kernel: Movable zone start for each node Jul 14 22:21:47.873574 kernel: Early memory node ranges Jul 14 22:21:47.873581 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 14 22:21:47.873588 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 14 22:21:47.873595 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jul 14 22:21:47.873604 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 14 22:21:47.873611 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 14 22:21:47.873618 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 14 22:21:47.873625 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 14 22:21:47.873632 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 14 22:21:47.873639 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 14 22:21:47.873646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 14 22:21:47.873653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 14 22:21:47.873660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 14 22:21:47.873669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 14 22:21:47.873676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 14 22:21:47.873683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 14 22:21:47.873690 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 14 22:21:47.873697 kernel: TSC deadline timer available Jul 14 22:21:47.873704 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 14 22:21:47.873712 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 14 22:21:47.873719 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 14 22:21:47.873726 kernel: kvm-guest: setup PV sched yield Jul 14 22:21:47.873733 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 14 22:21:47.873742 kernel: Booting paravirtualized kernel on KVM Jul 14 22:21:47.873749 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 14 22:21:47.873756 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 14 22:21:47.873764 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 14 22:21:47.873771 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 14 22:21:47.873777 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 14 22:21:47.873784 kernel: kvm-guest: PV spinlocks enabled Jul 14 22:21:47.873791 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 14 22:21:47.873800 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:21:47.873810 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 22:21:47.873817 kernel: random: crng init done Jul 14 22:21:47.873824 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 22:21:47.873831 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 22:21:47.873838 kernel: Fallback order for Node 0: 0 Jul 14 22:21:47.873845 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jul 14 22:21:47.873852 kernel: Policy zone: DMA32 Jul 14 22:21:47.873859 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 22:21:47.873869 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved) Jul 14 22:21:47.873877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 22:21:47.873884 kernel: ftrace: allocating 37970 entries in 149 pages Jul 14 22:21:47.873891 kernel: ftrace: allocated 149 pages with 4 groups Jul 14 22:21:47.873898 kernel: Dynamic Preempt: voluntary Jul 14 22:21:47.873905 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 22:21:47.873912 kernel: rcu: RCU event tracing is enabled. Jul 14 22:21:47.873920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 22:21:47.873927 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 22:21:47.873937 kernel: Rude variant of Tasks RCU enabled. Jul 14 22:21:47.873944 kernel: Tracing variant of Tasks RCU enabled. Jul 14 22:21:47.873953 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 14 22:21:47.873963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 22:21:47.873972 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 14 22:21:47.873982 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 14 22:21:47.873992 kernel: Console: colour VGA+ 80x25 Jul 14 22:21:47.874001 kernel: printk: console [ttyS0] enabled Jul 14 22:21:47.874008 kernel: ACPI: Core revision 20230628 Jul 14 22:21:47.874019 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 14 22:21:47.874026 kernel: APIC: Switch to symmetric I/O mode setup Jul 14 22:21:47.874033 kernel: x2apic enabled Jul 14 22:21:47.874040 kernel: APIC: Switched APIC routing to: physical x2apic Jul 14 22:21:47.874047 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 14 22:21:47.874055 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 14 22:21:47.874062 kernel: kvm-guest: setup PV IPIs Jul 14 22:21:47.874094 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 14 22:21:47.874112 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 14 22:21:47.874139 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 14 22:21:47.874150 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 14 22:21:47.874161 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 14 22:21:47.874173 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 14 22:21:47.874180 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 14 22:21:47.874188 kernel: Spectre V2 : Mitigation: Retpolines Jul 14 22:21:47.874195 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 14 22:21:47.874212 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 14 22:21:47.874225 kernel: RETBleed: Mitigation: untrained return thunk Jul 14 22:21:47.874235 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 14 22:21:47.874245 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 14 22:21:47.874255 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jul 14 22:21:47.874265 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 14 22:21:47.874275 kernel: x86/bugs: return thunk changed Jul 14 22:21:47.874285 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 14 22:21:47.874296 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 14 22:21:47.874309 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 14 22:21:47.874319 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 14 22:21:47.874328 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 14 22:21:47.874336 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 14 22:21:47.874344 kernel: Freeing SMP alternatives memory: 32K Jul 14 22:21:47.874351 kernel: pid_max: default: 32768 minimum: 301 Jul 14 22:21:47.874358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 14 22:21:47.874366 kernel: landlock: Up and running. Jul 14 22:21:47.874373 kernel: SELinux: Initializing. Jul 14 22:21:47.874383 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:21:47.874391 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:21:47.874398 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 14 22:21:47.874406 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:21:47.874414 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:21:47.874421 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:21:47.874429 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 14 22:21:47.874436 kernel: ... version: 0 Jul 14 22:21:47.874444 kernel: ... bit width: 48 Jul 14 22:21:47.874453 kernel: ... generic registers: 6 Jul 14 22:21:47.874461 kernel: ... value mask: 0000ffffffffffff Jul 14 22:21:47.874468 kernel: ... max period: 00007fffffffffff Jul 14 22:21:47.874476 kernel: ... fixed-purpose events: 0 Jul 14 22:21:47.874484 kernel: ... event mask: 000000000000003f Jul 14 22:21:47.874495 kernel: signal: max sigframe size: 1776 Jul 14 22:21:47.874504 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:21:47.874515 kernel: rcu: Max phase no-delay instances is 400. Jul 14 22:21:47.874525 kernel: smp: Bringing up secondary CPUs ... Jul 14 22:21:47.874538 kernel: smpboot: x86: Booting SMP configuration: Jul 14 22:21:47.874547 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 14 22:21:47.874555 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 22:21:47.874562 kernel: smpboot: Max logical packages: 1 Jul 14 22:21:47.874569 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 14 22:21:47.874577 kernel: devtmpfs: initialized Jul 14 22:21:47.874584 kernel: x86/mm: Memory block size: 128MB Jul 14 22:21:47.874592 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:21:47.874599 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 22:21:47.874609 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:21:47.874616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:21:47.874624 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:21:47.874631 kernel: audit: type=2000 audit(1752531707.616:1): state=initialized audit_enabled=0 res=1 Jul 14 22:21:47.874639 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:21:47.874646 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 22:21:47.874653 kernel: cpuidle: using governor menu Jul 14 22:21:47.874661 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:21:47.874668 kernel: dca service started, version 1.12.1 Jul 14 22:21:47.874678 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 14 22:21:47.874685 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 14 22:21:47.874693 kernel: PCI: Using configuration type 1 for base access Jul 14 22:21:47.874700 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 14 22:21:47.874708 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:21:47.874715 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 22:21:47.874723 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:21:47.874730 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 22:21:47.874737 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:21:47.874747 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:21:47.874754 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:21:47.874762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:21:47.874769 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 14 22:21:47.874776 kernel: ACPI: Interpreter enabled Jul 14 22:21:47.874784 kernel: ACPI: PM: (supports S0 S3 S5) Jul 14 22:21:47.874791 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 22:21:47.874798 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 22:21:47.874806 kernel: PCI: Using E820 reservations for host bridge windows Jul 14 22:21:47.874816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 14 22:21:47.874823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 22:21:47.875016 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:21:47.875198 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 14 22:21:47.875360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 14 22:21:47.875375 kernel: PCI host bridge to bus 0000:00 Jul 14 22:21:47.875529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 22:21:47.875655 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jul 14 22:21:47.875767 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 22:21:47.875905 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 14 22:21:47.876050 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 22:21:47.876190 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 14 22:21:47.876359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 22:21:47.876522 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 14 22:21:47.876691 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 14 22:21:47.876832 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 14 22:21:47.876973 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 14 22:21:47.877140 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 14 22:21:47.877277 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 22:21:47.877408 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 22:21:47.877541 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 14 22:21:47.877662 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 14 22:21:47.877781 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 14 22:21:47.877928 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 14 22:21:47.878052 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 14 22:21:47.878254 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 14 22:21:47.878400 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 14 22:21:47.878537 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 14 22:21:47.878662 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 14 22:21:47.878797 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 14 22:21:47.878917 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 14 22:21:47.879036 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 14 22:21:47.879183 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 14 22:21:47.879314 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 14 22:21:47.879451 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 14 22:21:47.879573 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 14 22:21:47.879708 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 14 22:21:47.879839 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 14 22:21:47.879982 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 14 22:21:47.879993 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 14 22:21:47.880001 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 14 22:21:47.880012 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:21:47.880021 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 14 22:21:47.880029 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 14 22:21:47.880036 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 14 22:21:47.880044 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 14 22:21:47.880052 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 14 22:21:47.880059 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 
14 22:21:47.880067 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 14 22:21:47.880086 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 14 22:21:47.880097 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 14 22:21:47.880105 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 14 22:21:47.880113 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 14 22:21:47.880120 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 14 22:21:47.880128 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 14 22:21:47.880136 kernel: iommu: Default domain type: Translated Jul 14 22:21:47.880143 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:21:47.880151 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:21:47.880159 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:21:47.880169 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 14 22:21:47.880177 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 14 22:21:47.880332 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 14 22:21:47.880458 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 14 22:21:47.880577 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:21:47.880587 kernel: vgaarb: loaded Jul 14 22:21:47.880595 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 14 22:21:47.880603 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 14 22:21:47.880615 kernel: clocksource: Switched to clocksource kvm-clock Jul 14 22:21:47.880623 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:21:47.880631 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:21:47.880639 kernel: pnp: PnP ACPI init Jul 14 22:21:47.880767 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 14 22:21:47.880778 kernel: pnp: PnP ACPI: found 6 devices Jul 14 22:21:47.880786 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:21:47.880794 kernel: NET: Registered PF_INET protocol family Jul 14 22:21:47.880806 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 22:21:47.880813 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 22:21:47.880821 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:21:47.880829 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:21:47.880837 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 14 22:21:47.880845 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 22:21:47.880852 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:21:47.880860 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:21:47.880868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:21:47.880878 kernel: NET: Registered PF_XDP protocol family Jul 14 22:21:47.880989 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 14 22:21:47.881169 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 14 22:21:47.881298 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 14 22:21:47.881407 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 14 22:21:47.881513 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jul 14 22:21:47.881619 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 14 22:21:47.881629 kernel: PCI: CLS 0 bytes, default 64 Jul 14 22:21:47.881641 kernel: Initialise system trusted keyrings Jul 14 22:21:47.881649 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 22:21:47.881656 kernel: Key type asymmetric registered Jul 14 22:21:47.881664 kernel: Asymmetric key parser 'x509' registered Jul 14 22:21:47.881672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 22:21:47.881679 kernel: io scheduler mq-deadline registered Jul 14 22:21:47.881686 kernel: io scheduler kyber registered Jul 14 22:21:47.881694 kernel: io scheduler bfq registered Jul 14 22:21:47.881701 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 14 22:21:47.881709 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 14 22:21:47.881720 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 14 22:21:47.881727 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 14 22:21:47.881735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:21:47.881743 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:21:47.881751 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 14 22:21:47.881758 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:21:47.881766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:21:47.881891 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 14 22:21:47.881906 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:21:47.882017 kernel: rtc_cmos 00:04: registered as rtc0 Jul 14 22:21:47.882146 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:21:47 UTC (1752531707) Jul 14 22:21:47.882271 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 14 22:21:47.882281 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 14 22:21:47.882289 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:21:47.882297 kernel: Segment Routing with IPv6 Jul 14 22:21:47.882304 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:21:47.882315 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:21:47.882323 kernel: Key type dns_resolver registered Jul 14 22:21:47.882330 kernel: IPI shorthand broadcast: enabled Jul 14 22:21:47.882338 kernel: sched_clock: Marking stable (576002515, 101598620)->(727483381, -49882246) Jul 14 22:21:47.882348 kernel: registered taskstats version 1 Jul 14 22:21:47.882356 kernel: Loading compiled-in X.509 certificates Jul 14 22:21:47.882366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 22:21:47.882374 kernel: Key type .fscrypt registered Jul 14 22:21:47.882381 kernel: Key type fscrypt-provisioning registered Jul 14 22:21:47.882392 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 22:21:47.882400 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:21:47.882407 kernel: ima: No architecture policies found Jul 14 22:21:47.882414 kernel: clk: Disabling unused clocks Jul 14 22:21:47.882422 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 22:21:47.882430 kernel: Write protecting the kernel read-only data: 36864k Jul 14 22:21:47.882437 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 22:21:47.882445 kernel: Run /init as init process Jul 14 22:21:47.882452 kernel: with arguments: Jul 14 22:21:47.882462 kernel: /init Jul 14 22:21:47.882475 kernel: with environment: Jul 14 22:21:47.882488 kernel: HOME=/ Jul 14 22:21:47.882498 kernel: TERM=linux Jul 14 22:21:47.882508 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:21:47.882518 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:21:47.882528 systemd[1]: Detected virtualization kvm. Jul 14 22:21:47.882536 systemd[1]: Detected architecture x86-64. Jul 14 22:21:47.882549 systemd[1]: Running in initrd. Jul 14 22:21:47.882557 systemd[1]: No hostname configured, using default hostname. Jul 14 22:21:47.882565 systemd[1]: Hostname set to . Jul 14 22:21:47.882573 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:21:47.882581 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:21:47.882590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:21:47.882598 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:21:47.882607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 22:21:47.882618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:21:47.882638 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 22:21:47.882649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 22:21:47.882660 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 22:21:47.882670 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 22:21:47.882679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:21:47.882687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:21:47.882695 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:21:47.882704 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:21:47.882712 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:21:47.882720 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:21:47.882728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:21:47.882736 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:21:47.882747 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:21:47.882755 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 14 22:21:47.882763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:21:47.882771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:21:47.882780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:21:47.882788 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:21:47.882796 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 22:21:47.882804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:21:47.882812 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 22:21:47.882823 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:21:47.882831 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:21:47.882839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:21:47.882847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:21:47.882856 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 22:21:47.882864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:21:47.882872 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:21:47.882886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:21:47.882917 systemd-journald[192]: Collecting audit messages is disabled. Jul 14 22:21:47.882940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:21:47.882949 systemd-journald[192]: Journal started Jul 14 22:21:47.882972 systemd-journald[192]: Runtime Journal (/run/log/journal/68f0848777684e5e93c88716d4866465) is 6.0M, max 48.4M, 42.3M free. Jul 14 22:21:47.869779 systemd-modules-load[193]: Inserted module 'overlay' Jul 14 22:21:47.911569 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:21:47.911602 kernel: Bridge firewalling registered Jul 14 22:21:47.911616 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:21:47.897403 systemd-modules-load[193]: Inserted module 'br_netfilter' Jul 14 22:21:47.913575 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:21:47.915875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:21:47.931251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:21:47.934351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:21:47.936906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:21:47.940543 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:21:47.950028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:21:47.951444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:21:47.956327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:21:47.965364 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 22:21:47.966018 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 14 22:21:47.969843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:21:47.978893 dracut-cmdline[224]: dracut-dracut-053 Jul 14 22:21:47.982310 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:21:48.014319 systemd-resolved[231]: Positive Trust Anchors: Jul 14 22:21:48.014334 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:21:48.014366 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:21:48.017328 systemd-resolved[231]: Defaulting to hostname 'linux'. Jul 14 22:21:48.018450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:21:48.023865 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:21:48.070111 kernel: SCSI subsystem initialized Jul 14 22:21:48.079108 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:21:48.090120 kernel: iscsi: registered transport (tcp) Jul 14 22:21:48.114250 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:21:48.114271 kernel: QLogic iSCSI HBA Driver Jul 14 22:21:48.166283 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 22:21:48.174224 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 22:21:48.199349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 22:21:48.199379 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:21:48.200359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 22:21:48.249115 kernel: raid6: avx2x4 gen() 26440 MB/s Jul 14 22:21:48.266114 kernel: raid6: avx2x2 gen() 28149 MB/s Jul 14 22:21:48.283201 kernel: raid6: avx2x1 gen() 25842 MB/s Jul 14 22:21:48.283228 kernel: raid6: using algorithm avx2x2 gen() 28149 MB/s Jul 14 22:21:48.301212 kernel: raid6: .... xor() 19819 MB/s, rmw enabled Jul 14 22:21:48.301255 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:21:48.322118 kernel: xor: automatically using best checksumming function avx Jul 14 22:21:48.479129 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 22:21:48.492919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:21:48.508262 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:21:48.524057 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jul 14 22:21:48.530060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 14 22:21:48.539259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 22:21:48.553577 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jul 14 22:21:48.587058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:21:48.604303 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:21:48.672250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:21:48.679304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 22:21:48.696545 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 22:21:48.698451 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:21:48.700422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:21:48.701724 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:21:48.711106 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:21:48.713312 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 22:21:48.723150 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 14 22:21:48.726753 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 22:21:48.725702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:21:48.736610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 22:21:48.736671 kernel: GPT:9289727 != 19775487 Jul 14 22:21:48.736693 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 22:21:48.736710 kernel: GPT:9289727 != 19775487 Jul 14 22:21:48.736730 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 22:21:48.736750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:21:48.739104 kernel: libata version 3.00 loaded. Jul 14 22:21:48.742433 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 22:21:48.742453 kernel: AES CTR mode by8 optimization enabled Jul 14 22:21:48.746023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:21:48.746162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:21:48.749759 kernel: ahci 0000:00:1f.2: version 3.0 Jul 14 22:21:48.749956 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 14 22:21:48.749524 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:21:48.759579 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 14 22:21:48.759773 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 14 22:21:48.759916 kernel: scsi host0: ahci Jul 14 22:21:48.752351 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:21:48.752535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:21:48.763435 kernel: scsi host1: ahci Jul 14 22:21:48.754104 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:21:48.766047 kernel: scsi host2: ahci Jul 14 22:21:48.766281 kernel: scsi host3: ahci Jul 14 22:21:48.767425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 14 22:21:48.771395 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (463) Jul 14 22:21:48.771410 kernel: scsi host4: ahci Jul 14 22:21:48.780660 kernel: scsi host5: ahci Jul 14 22:21:48.783196 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 14 22:21:48.783215 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 14 22:21:48.783229 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 14 22:21:48.783242 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Jul 14 22:21:48.783256 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 14 22:21:48.783269 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 14 22:21:48.785014 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 14 22:21:48.795663 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 14 22:21:48.822942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:21:48.831107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 14 22:21:48.831455 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 14 22:21:48.841279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 14 22:21:48.847879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:21:48.860294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 22:21:48.861569 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:21:48.875619 disk-uuid[567]: Primary Header is updated. Jul 14 22:21:48.875619 disk-uuid[567]: Secondary Entries is updated. Jul 14 22:21:48.875619 disk-uuid[567]: Secondary Header is updated. Jul 14 22:21:48.880143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:21:48.881117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:21:48.888110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:21:49.097517 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 14 22:21:49.097609 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 14 22:21:49.097624 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 14 22:21:49.099108 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 14 22:21:49.099147 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 14 22:21:49.100111 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 14 22:21:49.101452 kernel: ata3.00: applying bridge limits Jul 14 22:21:49.101473 kernel: ata3.00: configured for UDMA/100 Jul 14 22:21:49.102116 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 14 22:21:49.107112 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 14 22:21:49.141617 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 14 22:21:49.141970 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:21:49.158213 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:21:49.887181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:21:49.887241 disk-uuid[575]: The operation has completed successfully. 
Jul 14 22:21:49.915701 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:21:49.915830 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:21:49.949335 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:21:49.953321 sh[591]: Success Jul 14 22:21:49.966119 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 14 22:21:50.000654 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:21:50.009650 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 22:21:50.012791 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:21:50.027156 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 22:21:50.027195 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:21:50.027210 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 22:21:50.028127 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 22:21:50.029403 kernel: BTRFS info (device dm-0): using free space tree Jul 14 22:21:50.033989 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:21:50.035123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 14 22:21:50.052307 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:21:50.054323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 22:21:50.063340 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:21:50.063372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:21:50.063383 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:21:50.066126 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:21:50.075611 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:21:50.077433 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:21:50.086900 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:21:50.093437 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:21:50.149294 ignition[679]: Ignition 2.19.0 Jul 14 22:21:50.149306 ignition[679]: Stage: fetch-offline Jul 14 22:21:50.149341 ignition[679]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:21:50.149352 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:21:50.149441 ignition[679]: parsed url from cmdline: "" Jul 14 22:21:50.149445 ignition[679]: no config URL provided Jul 14 22:21:50.149451 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:21:50.149460 ignition[679]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:21:50.149488 ignition[679]: op(1): [started] loading QEMU firmware config module Jul 14 22:21:50.149493 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:21:50.160230 ignition[679]: op(1): [finished] loading QEMU firmware config module Jul 14 22:21:50.160257 ignition[679]: QEMU firmware config was not found. Ignoring... 
Jul 14 22:21:50.162757 ignition[679]: parsing config with SHA512: 298708faf39a200a150a6681dd9b656f5a9605321afb93c48cea1f54b183a81d0213c761cfdf305c4e4ba552cfd16b1d4f6539b12ac5deddc858c8037c14b2ae Jul 14 22:21:50.165697 unknown[679]: fetched base config from "system" Jul 14 22:21:50.166067 ignition[679]: fetch-offline: fetch-offline passed Jul 14 22:21:50.165710 unknown[679]: fetched user config from "qemu" Jul 14 22:21:50.166185 ignition[679]: Ignition finished successfully Jul 14 22:21:50.168823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:21:50.185261 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:21:50.195225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:21:50.215844 systemd-networkd[781]: lo: Link UP Jul 14 22:21:50.215855 systemd-networkd[781]: lo: Gained carrier Jul 14 22:21:50.217396 systemd-networkd[781]: Enumeration completed Jul 14 22:21:50.217518 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:21:50.217793 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:21:50.217796 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:21:50.218573 systemd-networkd[781]: eth0: Link UP Jul 14 22:21:50.218577 systemd-networkd[781]: eth0: Gained carrier Jul 14 22:21:50.218584 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:21:50.220773 systemd[1]: Reached target network.target - Network. Jul 14 22:21:50.223793 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:21:50.231229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 22:21:50.235126 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:21:50.246981 ignition[785]: Ignition 2.19.0 Jul 14 22:21:50.246999 ignition[785]: Stage: kargs Jul 14 22:21:50.247245 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:21:50.247262 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:21:50.248149 ignition[785]: kargs: kargs passed Jul 14 22:21:50.248202 ignition[785]: Ignition finished successfully Jul 14 22:21:50.251315 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 22:21:50.263265 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 22:21:50.274438 ignition[794]: Ignition 2.19.0 Jul 14 22:21:50.274449 ignition[794]: Stage: disks Jul 14 22:21:50.274631 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:21:50.274643 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:21:50.277527 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:21:50.275278 ignition[794]: disks: disks passed Jul 14 22:21:50.279225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:21:50.275322 ignition[794]: Ignition finished successfully Jul 14 22:21:50.281031 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:21:50.282854 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 14 22:21:50.283099 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:21:50.283418 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:21:50.290228 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:21:50.301127 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.145 Jul 14 22:21:50.301147 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jul 14 22:21:50.303749 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 14 22:21:50.309022 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:21:50.321163 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:21:50.409123 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 22:21:50.410037 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:21:50.412216 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:21:50.420199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:21:50.422261 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 22:21:50.423040 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:21:50.428944 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Jul 14 22:21:50.428977 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:21:50.423112 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:21:50.435549 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:21:50.435573 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:21:50.435588 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:21:50.423154 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:21:50.431430 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:21:50.436597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:21:50.439678 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:21:50.475291 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:21:50.480671 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:21:50.485062 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:21:50.489695 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:21:50.577986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:21:50.589191 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:21:50.590984 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 22:21:50.598111 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:21:50.657186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 14 22:21:50.688288 ignition[931]: INFO : Ignition 2.19.0 Jul 14 22:21:50.688288 ignition[931]: INFO : Stage: mount Jul 14 22:21:50.690282 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:21:50.690282 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:21:50.690282 ignition[931]: INFO : mount: mount passed Jul 14 22:21:50.690282 ignition[931]: INFO : Ignition finished successfully Jul 14 22:21:50.690882 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:21:50.693017 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:21:51.026186 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:21:51.043218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:21:51.098754 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Jul 14 22:21:51.098786 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:21:51.098798 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:21:51.100229 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:21:51.103103 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:21:51.104421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:21:51.124715 ignition[957]: INFO : Ignition 2.19.0 Jul 14 22:21:51.124715 ignition[957]: INFO : Stage: files Jul 14 22:21:51.126386 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:21:51.126386 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:21:51.126386 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:21:51.131179 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:21:51.131179 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:21:51.131179 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:21:51.131179 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:21:51.131179 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:21:51.129652 unknown[957]: wrote ssh authorized keys file for user: core Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:21:51.138710 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:21:51.138710 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 14 22:21:51.610285 systemd-networkd[781]: eth0: Gained IPv6LL Jul 14 22:22:11.667186 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 14 22:22:12.646390 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:22:12.646390 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 14 22:22:12.650912 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:22:12.653687 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:22:12.653687 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 14 22:22:12.653687 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:22:12.682861 ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:22:12.689999 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:22:12.691539 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:22:12.691539 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:22:12.691539 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:22:12.691539 ignition[957]: INFO : files: files passed Jul 14 22:22:12.691539 ignition[957]: INFO : Ignition finished successfully Jul 14 22:22:12.698339 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:22:12.711236 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:22:12.714491 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 22:22:12.718540 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:22:12.718686 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:22:12.725475 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 22:22:12.730133 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:22:12.730133 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:22:12.733823 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:22:12.736169 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:22:12.739378 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jul 14 22:22:12.750334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:22:12.776065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:22:12.776224 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:22:12.776933 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:22:12.780342 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:22:12.780746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:22:12.786261 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:22:12.801531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:22:12.814310 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:22:12.826708 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:22:12.827518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:22:12.827845 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 22:22:12.828147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:22:12.828300 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:22:12.828992 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:22:12.829491 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:22:12.865903 ignition[1012]: INFO : Ignition 2.19.0 Jul 14 22:22:12.865903 ignition[1012]: INFO : Stage: umount Jul 14 22:22:12.865903 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:22:12.865903 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:22:12.865903 ignition[1012]: INFO : umount: umount passed Jul 14 22:22:12.865903 ignition[1012]: INFO : Ignition finished successfully Jul 14 22:22:12.829849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:22:12.830395 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:22:12.830743 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:22:12.831107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:22:12.831474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:22:12.831860 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:22:12.832189 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:22:12.832539 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:22:12.832819 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:22:12.832967 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:22:12.833871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:22:12.834378 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:22:12.834712 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:22:12.834888 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:22:12.835375 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 14 22:22:12.835531 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:22:12.836195 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:22:12.836341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:22:12.836806 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:22:12.837321 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:22:12.841177 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:22:12.841776 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:22:12.842093 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:22:12.842391 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:22:12.842534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:22:12.842889 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:22:12.843013 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:22:12.843553 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:22:12.843707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:22:12.844179 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:22:12.844319 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:22:12.845811 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:22:12.847164 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:22:12.847472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:22:12.847648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:22:12.848057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:22:12.848208 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:22:12.852611 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:22:12.852747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:22:12.867513 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:22:12.867676 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:22:12.870138 systemd[1]: Stopped target network.target - Network. Jul 14 22:22:12.871430 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:22:12.871522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:22:12.873298 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:22:12.873358 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:22:12.875183 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:22:12.875248 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:22:12.877263 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:22:12.877324 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:22:12.880207 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:22:12.882478 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:22:12.885155 systemd-networkd[781]: eth0: DHCPv6 lease lost Jul 14 22:22:12.885870 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 14 22:22:12.887877 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:22:12.888028 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:22:12.890057 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:22:12.890132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:22:12.899233 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 22:22:12.900614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:22:12.900690 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:22:12.903182 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:22:12.905674 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:22:12.905819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:22:12.910584 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:22:12.910668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:22:12.912227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:22:12.912275 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:22:12.914139 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:22:12.914188 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:22:12.917845 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:22:12.918035 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:22:12.919784 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:22:12.919919 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:22:12.922148 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:22:12.922259 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 22:22:12.923490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:22:12.923536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:22:12.925703 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:22:12.925761 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:22:12.928367 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:22:12.928416 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:22:12.930072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:22:12.930136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:22:12.944297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:22:12.945888 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:22:12.945970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:22:12.948007 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 14 22:22:12.948059 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:22:12.950065 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 14 22:22:12.950129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:22:12.952275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:22:12.952327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:22:12.954813 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:22:12.954951 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:22:13.353961 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:22:13.354943 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:22:13.356966 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:22:13.358917 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:22:13.358979 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:22:13.372367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:22:13.378746 systemd[1]: Switching root. Jul 14 22:22:13.410873 systemd-journald[192]: Journal stopped Jul 14 22:22:15.069196 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jul 14 22:22:15.069264 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:22:15.069281 kernel: SELinux: policy capability open_perms=1 Jul 14 22:22:15.069296 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:22:15.069312 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:22:15.069323 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:22:15.069335 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:22:15.069346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:22:15.069357 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:22:15.069372 kernel: audit: type=1403 audit(1752531734.255:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:22:15.069388 systemd[1]: Successfully loaded SELinux policy in 39.583ms. Jul 14 22:22:15.069426 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.467ms. Jul 14 22:22:15.069443 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:22:15.069459 systemd[1]: Detected virtualization kvm. Jul 14 22:22:15.069474 systemd[1]: Detected architecture x86-64. Jul 14 22:22:15.069492 systemd[1]: Detected first boot. Jul 14 22:22:15.069507 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:22:15.069525 zram_generator::config[1056]: No configuration found. Jul 14 22:22:15.069542 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:22:15.069557 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:22:15.069569 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 22:22:15.069582 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:22:15.069595 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:22:15.069607 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:22:15.069618 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jul 14 22:22:15.069633 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 22:22:15.069645 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:22:15.069657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:22:15.069669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:22:15.069680 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:22:15.069693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:22:15.069705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:22:15.069716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:22:15.069728 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:22:15.069742 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 22:22:15.069755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:22:15.069767 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:22:15.069778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:22:15.069790 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 22:22:15.069802 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 22:22:15.069814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 22:22:15.069828 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:22:15.069841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:22:15.069853 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:22:15.069865 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:22:15.069876 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:22:15.069888 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 22:22:15.069900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:22:15.069911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:22:15.069923 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:22:15.069935 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:22:15.069949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:22:15.069960 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 22:22:15.069972 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:22:15.069984 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:22:15.070000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:15.070012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:22:15.070025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 22:22:15.071139 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 14 22:22:15.071165 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:22:15.071179 systemd[1]: Reached target machines.target - Containers. Jul 14 22:22:15.071191 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:22:15.071205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:22:15.071218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:22:15.071230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:22:15.071242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:22:15.071254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:22:15.071266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:22:15.071280 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:22:15.071292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:22:15.071307 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:22:15.071322 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:22:15.071337 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 22:22:15.071353 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:22:15.071368 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:22:15.071383 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:22:15.071431 kernel: loop: module loaded Jul 14 22:22:15.071453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:22:15.071472 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:22:15.071488 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:22:15.071503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:22:15.071518 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:22:15.071530 systemd[1]: Stopped verity-setup.service. Jul 14 22:22:15.071544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:15.071587 systemd-journald[1119]: Collecting audit messages is disabled. Jul 14 22:22:15.071624 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:22:15.071637 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 22:22:15.071649 systemd-journald[1119]: Journal started Jul 14 22:22:15.071671 systemd-journald[1119]: Runtime Journal (/run/log/journal/68f0848777684e5e93c88716d4866465) is 6.0M, max 48.4M, 42.3M free. Jul 14 22:22:14.817702 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:22:14.837359 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 22:22:14.837847 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 22:22:15.076271 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 14 22:22:15.077902 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:22:15.079110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 22:22:15.080467 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 22:22:15.098848 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:22:15.103130 kernel: ACPI: bus type drm_connector registered Jul 14 22:22:15.103280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:22:15.105107 kernel: fuse: init (API version 7.39) Jul 14 22:22:15.105767 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:22:15.105944 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:22:15.107617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:22:15.107794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:22:15.109412 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:22:15.109635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:22:15.111203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:22:15.111370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:22:15.112889 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:22:15.113060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:22:15.114526 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:22:15.114692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:22:15.116099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:22:15.117477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:22:15.119208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:22:15.133142 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:22:15.143179 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:22:15.145545 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 22:22:15.146697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:22:15.146724 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:22:15.148754 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 22:22:15.151310 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 22:22:15.154247 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:22:15.155344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:22:15.158273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:22:15.162471 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:22:15.164249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 14 22:22:15.167195 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:22:15.168526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:22:15.171016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:22:15.180673 systemd-journald[1119]: Time spent on flushing to /var/log/journal/68f0848777684e5e93c88716d4866465 is 16.082ms for 935 entries. Jul 14 22:22:15.180673 systemd-journald[1119]: System Journal (/var/log/journal/68f0848777684e5e93c88716d4866465) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:22:15.824593 systemd-journald[1119]: Received client request to flush runtime journal. Jul 14 22:22:15.824668 kernel: loop0: detected capacity change from 0 to 224512 Jul 14 22:22:15.824702 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:22:15.824734 kernel: loop1: detected capacity change from 0 to 142488 Jul 14 22:22:15.824756 kernel: loop2: detected capacity change from 0 to 140768 Jul 14 22:22:15.824780 kernel: loop3: detected capacity change from 0 to 224512 Jul 14 22:22:15.824805 kernel: loop4: detected capacity change from 0 to 142488 Jul 14 22:22:15.824829 kernel: loop5: detected capacity change from 0 to 140768 Jul 14 22:22:15.824854 zram_generator::config[1219]: No configuration found. Jul 14 22:22:15.824946 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:22:15.185290 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:22:15.187668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:22:15.190722 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:22:15.193199 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:22:15.194815 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:22:15.196293 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 22:22:15.197848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 22:22:15.211441 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 22:22:15.226339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:22:15.227883 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 22:22:15.231141 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jul 14 22:22:15.231156 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jul 14 22:22:15.238091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:22:15.246747 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:22:15.291686 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:22:15.300231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:22:15.322234 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jul 14 22:22:15.322250 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jul 14 22:22:15.328298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 14 22:22:15.383240 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 22:22:15.384608 (sd-merge)[1190]: Merged extensions into '/usr'. Jul 14 22:22:15.390427 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:22:15.390441 systemd[1]: Reloading... Jul 14 22:22:15.668299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:22:15.717907 systemd[1]: Reloading finished in 327 ms. Jul 14 22:22:15.752553 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:22:15.754365 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:22:15.757846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:22:15.775427 systemd[1]: Starting ensure-sysext.service... Jul 14 22:22:15.778313 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 22:22:15.784795 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:22:15.784808 systemd[1]: Reloading... Jul 14 22:22:15.844055 zram_generator::config[1277]: No configuration found. Jul 14 22:22:16.315826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:22:16.366944 systemd[1]: Reloading finished in 581 ms. Jul 14 22:22:16.389128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:22:16.390634 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:22:16.414205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:22:16.416914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.417157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:22:16.428428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:22:16.430869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:22:16.433393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:22:16.434471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:22:16.434594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.435840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:22:16.436062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:22:16.440058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.440568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:22:16.442113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 14 22:22:16.443372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:22:16.443520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.444368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:22:16.444580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:22:16.446424 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:22:16.446628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:22:16.449180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:22:16.449396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:22:16.455332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.455606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:22:16.456934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:22:16.459175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:22:16.461626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:22:16.464063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:22:16.465290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:22:16.465485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:22:16.466623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:22:16.466835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:22:16.468792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:22:16.468992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:22:16.469479 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:22:16.469868 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:22:16.470939 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:22:16.471539 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jul 14 22:22:16.471621 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jul 14 22:22:16.473775 systemd[1]: Finished ensure-sysext.service. Jul 14 22:22:16.475324 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:22:16.475506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:22:16.475758 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:22:16.475769 systemd-tmpfiles[1322]: Skipping /boot Jul 14 22:22:16.476998 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:22:16.477204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 14 22:22:16.482968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:22:16.483033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:22:16.487337 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:22:16.487358 systemd-tmpfiles[1322]: Skipping /boot Jul 14 22:22:16.612128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:22:16.627238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:22:16.629710 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:22:16.631817 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:22:16.636609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:22:16.644195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:22:16.649236 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:22:16.653211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:22:16.661926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:22:16.664549 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:22:16.666313 augenrules[1364]: No rules Jul 14 22:22:16.667810 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:22:16.694607 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:22:16.696737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:22:16.699303 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:22:16.836817 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:22:16.838330 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:22:16.842215 systemd-resolved[1347]: Positive Trust Anchors: Jul 14 22:22:16.842236 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:22:16.842268 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:22:16.846118 systemd-resolved[1347]: Defaulting to hostname 'linux'. Jul 14 22:22:16.847702 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:22:16.848860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:22:16.972545 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jul 14 22:22:16.984332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:22:17.009542 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:22:17.024273 systemd-udevd[1381]: Using default interface naming scheme 'v255'. Jul 14 22:22:17.067045 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:22:17.070159 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 22:22:17.074704 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:22:17.083542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:22:17.091449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:22:17.116922 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 14 22:22:17.149114 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396) Jul 14 22:22:17.158162 systemd-networkd[1392]: lo: Link UP Jul 14 22:22:17.158178 systemd-networkd[1392]: lo: Gained carrier Jul 14 22:22:17.160568 systemd-networkd[1392]: Enumeration completed Jul 14 22:22:17.160985 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:22:17.162423 systemd[1]: Reached target network.target - Network. Jul 14 22:22:17.164123 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:22:17.164135 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:22:17.165712 systemd-networkd[1392]: eth0: Link UP Jul 14 22:22:17.165723 systemd-networkd[1392]: eth0: Gained carrier Jul 14 22:22:17.165735 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:22:17.169264 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:22:17.181527 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:22:17.182975 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jul 14 22:22:17.907140 systemd-timesyncd[1355]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:22:17.907186 systemd-timesyncd[1355]: Initial clock synchronization to Mon 2025-07-14 22:22:17.907049 UTC. Jul 14 22:22:17.907230 systemd-resolved[1347]: Clock change detected. Flushing caches. Jul 14 22:22:17.911684 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 22:22:17.915649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:22:17.925821 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:22:17.931991 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:22:17.932035 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 22:22:17.959007 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 14 22:22:18.016653 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:22:18.016731 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:22:18.018083 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:22:18.018292 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:22:18.018119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:22:18.034645 kernel: kvm_amd: TSC scaling supported Jul 14 22:22:18.034739 kernel: kvm_amd: Nested Virtualization enabled Jul 14 22:22:18.034765 kernel: kvm_amd: Nested Paging enabled Jul 14 22:22:18.034787 kernel: kvm_amd: LBR virtualization supported Jul 14 22:22:18.034807 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 14 22:22:18.034831 kernel: kvm_amd: Virtual GIF supported Jul 14 22:22:18.054638 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:22:18.093385 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 22:22:18.111691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:22:18.123847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 22:22:18.132202 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:22:18.168145 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 22:22:18.169641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:22:18.170721 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:22:18.171834 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 22:22:18.173049 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:22:18.174422 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:22:18.175596 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:22:18.176930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:22:18.178099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:22:18.178123 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:22:18.178980 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:22:18.180634 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:22:18.183136 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:22:18.193944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:22:18.196184 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 22:22:18.197760 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:22:18.198857 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:22:18.199801 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:22:18.200836 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:22:18.200872 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:22:18.201775 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 14 22:22:18.203743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:22:18.207629 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:22:18.207960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 22:22:18.211793 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:22:18.215539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:22:18.216783 jq[1435]: false Jul 14 22:22:18.216869 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:22:18.220769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:22:18.224770 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:22:18.231090 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:22:18.232553 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:22:18.233041 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:22:18.235995 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 22:22:18.239624 extend-filesystems[1436]: Found loop3 Jul 14 22:22:18.239624 extend-filesystems[1436]: Found loop4 Jul 14 22:22:18.239624 extend-filesystems[1436]: Found loop5 Jul 14 22:22:18.239624 extend-filesystems[1436]: Found sr0 Jul 14 22:22:18.239624 extend-filesystems[1436]: Found vda Jul 14 22:22:18.239624 extend-filesystems[1436]: Found vda1 Jul 14 22:22:18.239624 extend-filesystems[1436]: Found vda2 Jul 14 22:22:18.252096 extend-filesystems[1436]: Found vda3 Jul 14 22:22:18.252096 extend-filesystems[1436]: Found usr Jul 14 22:22:18.252096 extend-filesystems[1436]: Found vda4 Jul 14 22:22:18.252096 extend-filesystems[1436]: Found vda6 Jul 14 22:22:18.252096 extend-filesystems[1436]: Found vda7 Jul 14 22:22:18.252096 extend-filesystems[1436]: Found vda9 Jul 14 22:22:18.252096 extend-filesystems[1436]: Checking size of /dev/vda9 Jul 14 22:22:18.240730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:22:18.243049 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 22:22:18.258267 jq[1448]: true Jul 14 22:22:18.248538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:22:18.249517 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:22:18.250492 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:22:18.251580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:22:18.256037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:22:18.256861 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:22:18.267467 dbus-daemon[1434]: [system] SELinux support is enabled Jul 14 22:22:18.270186 update_engine[1444]: I20250714 22:22:18.269129 1444 main.cc:92] Flatcar Update Engine starting Jul 14 22:22:18.269723 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 14 22:22:18.272917 extend-filesystems[1436]: Resized partition /dev/vda9 Jul 14 22:22:18.274449 update_engine[1444]: I20250714 22:22:18.274392 1444 update_check_scheduler.cc:74] Next update check in 8m20s Jul 14 22:22:18.277813 jq[1454]: true Jul 14 22:22:18.281818 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Jul 14 22:22:18.288685 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1398) Jul 14 22:22:18.288746 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:22:18.290994 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:22:18.309788 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:22:18.311750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:22:18.311792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:22:18.313723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:22:18.313750 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:22:18.317736 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:22:18.317766 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:22:18.319206 systemd-logind[1442]: New seat seat0. Jul 14 22:22:18.322392 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:22:18.323756 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:22:18.334643 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:22:18.357803 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:22:18.361440 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:22:18.361440 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:22:18.361440 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:22:18.367472 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jul 14 22:22:18.362447 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:22:18.362695 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:22:18.372669 bash[1484]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:22:18.373623 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:22:18.377372 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:22:18.379435 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:22:18.406678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:22:18.413949 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:22:18.424114 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:22:18.424427 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 14 22:22:18.434150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:22:18.446822 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:22:18.450414 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:22:18.453059 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:22:18.455028 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:22:18.497862 containerd[1458]: time="2025-07-14T22:22:18.497674656Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:22:18.524659 containerd[1458]: time="2025-07-14T22:22:18.524562480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526394 containerd[1458]: time="2025-07-14T22:22:18.526350293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526394 containerd[1458]: time="2025-07-14T22:22:18.526377544Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:22:18.526394 containerd[1458]: time="2025-07-14T22:22:18.526396760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:22:18.526627 containerd[1458]: time="2025-07-14T22:22:18.526587769Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:22:18.526627 containerd[1458]: time="2025-07-14T22:22:18.526625459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526712 containerd[1458]: time="2025-07-14T22:22:18.526693808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526712 containerd[1458]: time="2025-07-14T22:22:18.526709337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526957 containerd[1458]: time="2025-07-14T22:22:18.526926865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:22:18.526957 containerd[1458]: time="2025-07-14T22:22:18.526952493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527007 containerd[1458]: time="2025-07-14T22:22:18.526965407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527007 containerd[1458]: time="2025-07-14T22:22:18.526976147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527085 containerd[1458]: time="2025-07-14T22:22:18.527067759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527327 containerd[1458]: time="2025-07-14T22:22:18.527301578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527449 containerd[1458]: time="2025-07-14T22:22:18.527423286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:22:18.527449 containerd[1458]: time="2025-07-14T22:22:18.527439055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:22:18.527554 containerd[1458]: time="2025-07-14T22:22:18.527531659Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:22:18.527604 containerd[1458]: time="2025-07-14T22:22:18.527588676Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:22:18.680808 containerd[1458]: time="2025-07-14T22:22:18.680699358Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:22:18.680808 containerd[1458]: time="2025-07-14T22:22:18.680777885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:22:18.680808 containerd[1458]: time="2025-07-14T22:22:18.680797773Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:22:18.680808 containerd[1458]: time="2025-07-14T22:22:18.680820375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:22:18.680808 containerd[1458]: time="2025-07-14T22:22:18.680848257Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:22:18.681093 containerd[1458]: time="2025-07-14T22:22:18.681058401Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:22:18.681360 containerd[1458]: time="2025-07-14T22:22:18.681299594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:22:18.681482 containerd[1458]: time="2025-07-14T22:22:18.681425901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:22:18.681482 containerd[1458]: time="2025-07-14T22:22:18.681445087Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:22:18.681482 containerd[1458]: time="2025-07-14T22:22:18.681457460Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:22:18.681482 containerd[1458]: time="2025-07-14T22:22:18.681470605Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681482 containerd[1458]: time="2025-07-14T22:22:18.681483729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681498577Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681512643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681525908Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681537720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681550975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681561826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681579609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681593205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681604836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681641615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681657996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.681671 containerd[1458]: time="2025-07-14T22:22:18.681672203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681684005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681696719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681708962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681722697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681733798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681745160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681763654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681777651Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681795925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681807196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681817325Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681886505Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681910129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:22:18.682001 containerd[1458]: time="2025-07-14T22:22:18.681923324Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:22:18.682361 containerd[1458]: time="2025-07-14T22:22:18.681935306Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:22:18.682361 containerd[1458]: time="2025-07-14T22:22:18.681944964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:22:18.682361 containerd[1458]: time="2025-07-14T22:22:18.681960564Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:22:18.682361 containerd[1458]: time="2025-07-14T22:22:18.681973618Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:22:18.682361 containerd[1458]: time="2025-07-14T22:22:18.681986512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:22:18.682506 containerd[1458]: time="2025-07-14T22:22:18.682272920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:22:18.682506 containerd[1458]: time="2025-07-14T22:22:18.682323905Z" level=info msg="Connect containerd service" Jul 14 22:22:18.682506 containerd[1458]: time="2025-07-14T22:22:18.682353070Z" level=info msg="using legacy CRI server" Jul 14 22:22:18.682506 containerd[1458]: time="2025-07-14T22:22:18.682359362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:22:18.682506 containerd[1458]: time="2025-07-14T22:22:18.682442427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:22:18.683729 containerd[1458]: time="2025-07-14T22:22:18.683505752Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:22:18.683857 
containerd[1458]: time="2025-07-14T22:22:18.683807097Z" level=info msg="Start subscribing containerd event" Jul 14 22:22:18.683890 containerd[1458]: time="2025-07-14T22:22:18.683858163Z" level=info msg="Start recovering state" Jul 14 22:22:18.683950 containerd[1458]: time="2025-07-14T22:22:18.683933194Z" level=info msg="Start event monitor" Jul 14 22:22:18.683993 containerd[1458]: time="2025-07-14T22:22:18.683950656Z" level=info msg="Start snapshots syncer" Jul 14 22:22:18.683993 containerd[1458]: time="2025-07-14T22:22:18.683960154Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:22:18.683993 containerd[1458]: time="2025-07-14T22:22:18.683971616Z" level=info msg="Start streaming server" Jul 14 22:22:18.684059 containerd[1458]: time="2025-07-14T22:22:18.684036678Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:22:18.684111 containerd[1458]: time="2025-07-14T22:22:18.684093054Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:22:18.684172 containerd[1458]: time="2025-07-14T22:22:18.684155180Z" level=info msg="containerd successfully booted in 0.187970s" Jul 14 22:22:18.684264 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:22:19.916818 systemd-networkd[1392]: eth0: Gained IPv6LL Jul 14 22:22:19.920665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:22:19.922661 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:22:19.935838 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:22:19.938409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:22:19.940550 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:22:19.961350 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:22:19.961592 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:22:19.963228 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:22:19.963717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:22:20.697476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:22:20.699204 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:22:20.700576 systemd[1]: Startup finished in 706ms (kernel) + 26.563s (initrd) + 5.760s (userspace) = 33.030s. Jul 14 22:22:20.713198 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:22:21.150734 kubelet[1541]: E0714 22:22:21.150573 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:22:21.154729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:22:21.154942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:22:21.155294 systemd[1]: kubelet.service: Consumed 1.055s CPU time. Jul 14 22:22:21.425380 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 14 22:22:21.426671 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:41764.service - OpenSSH per-connection server daemon (10.0.0.1:41764). Jul 14 22:22:21.480097 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 41764 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:21.482249 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:21.490702 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:22:21.500871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:22:21.502657 systemd-logind[1442]: New session 1 of user core. Jul 14 22:22:21.514924 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:22:21.518042 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:22:21.526808 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:22:21.632889 systemd[1558]: Queued start job for default target default.target. Jul 14 22:22:21.646458 systemd[1558]: Created slice app.slice - User Application Slice. Jul 14 22:22:21.646491 systemd[1558]: Reached target paths.target - Paths. Jul 14 22:22:21.646506 systemd[1558]: Reached target timers.target - Timers. Jul 14 22:22:21.648438 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:22:21.660678 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:22:21.660876 systemd[1558]: Reached target sockets.target - Sockets. Jul 14 22:22:21.660902 systemd[1558]: Reached target basic.target - Basic System. Jul 14 22:22:21.660959 systemd[1558]: Reached target default.target - Main User Target. Jul 14 22:22:21.661007 systemd[1558]: Startup finished in 126ms. Jul 14 22:22:21.661317 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:22:21.673822 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:22:21.734297 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:41766.service - OpenSSH per-connection server daemon (10.0.0.1:41766). Jul 14 22:22:21.777570 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 41766 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:21.779257 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:21.783170 systemd-logind[1442]: New session 2 of user core. Jul 14 22:22:21.792769 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:22:21.848786 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:21.861579 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:41766.service: Deactivated successfully. Jul 14 22:22:21.863263 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:22:21.864832 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:22:21.871917 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:41768.service - OpenSSH per-connection server daemon (10.0.0.1:41768). Jul 14 22:22:21.872976 systemd-logind[1442]: Removed session 2. Jul 14 22:22:21.907883 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:21.909756 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:21.914732 systemd-logind[1442]: New session 3 of user core. 
Jul 14 22:22:21.924888 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:22:21.977565 sshd[1576]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:21.989588 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:41768.service: Deactivated successfully. Jul 14 22:22:21.991443 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:22:21.993131 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:22:21.994408 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:41774.service - OpenSSH per-connection server daemon (10.0.0.1:41774). Jul 14 22:22:21.995122 systemd-logind[1442]: Removed session 3. Jul 14 22:22:22.049500 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 41774 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:22.051273 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:22.056036 systemd-logind[1442]: New session 4 of user core. Jul 14 22:22:22.065896 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:22:22.122523 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:22.135799 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:41774.service: Deactivated successfully. Jul 14 22:22:22.137437 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:22:22.139143 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:22:22.155208 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:41778.service - OpenSSH per-connection server daemon (10.0.0.1:41778). Jul 14 22:22:22.156152 systemd-logind[1442]: Removed session 4. Jul 14 22:22:22.188428 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 41778 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:22.190161 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:22.194442 systemd-logind[1442]: New session 5 of user core. Jul 14 22:22:22.200841 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:22:22.262491 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:22:22.262853 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:22:22.281892 sudo[1593]: pam_unix(sudo:session): session closed for user root Jul 14 22:22:22.284067 sshd[1590]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:22.293599 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:41778.service: Deactivated successfully. Jul 14 22:22:22.295218 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:22:22.296463 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:22:22.297778 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:41782.service - OpenSSH per-connection server daemon (10.0.0.1:41782). Jul 14 22:22:22.298519 systemd-logind[1442]: Removed session 5. Jul 14 22:22:22.337306 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 41782 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:22.338938 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:22.342852 systemd-logind[1442]: New session 6 of user core. Jul 14 22:22:22.352758 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 14 22:22:22.407019 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:22:22.407417 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:22:22.410936 sudo[1602]: pam_unix(sudo:session): session closed for user root Jul 14 22:22:22.416652 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:22:22.416984 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:22:22.434887 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:22:22.436455 auditctl[1605]: No rules Jul 14 22:22:22.437694 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:22:22.437936 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:22:22.439549 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:22:22.469479 augenrules[1623]: No rules Jul 14 22:22:22.471409 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:22:22.472746 sudo[1601]: pam_unix(sudo:session): session closed for user root Jul 14 22:22:22.474516 sshd[1598]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:22.491397 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:41782.service: Deactivated successfully. Jul 14 22:22:22.493014 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:22:22.494205 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:22:22.495300 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:41788.service - OpenSSH per-connection server daemon (10.0.0.1:41788). Jul 14 22:22:22.496159 systemd-logind[1442]: Removed session 6. Jul 14 22:22:22.532602 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 41788 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:22.534357 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:22.537986 systemd-logind[1442]: New session 7 of user core. Jul 14 22:22:22.544745 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:22:22.597812 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:22:22.598178 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:22:22.618886 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:22:22.636766 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:22:22.637004 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:22:23.090105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:22:23.090313 systemd[1]: kubelet.service: Consumed 1.055s CPU time. Jul 14 22:22:23.101978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:22:23.123482 systemd[1]: Reloading requested from client PID 1675 ('systemctl') (unit session-7.scope)... Jul 14 22:22:23.123499 systemd[1]: Reloading... Jul 14 22:22:23.203654 zram_generator::config[1716]: No configuration found. Jul 14 22:22:23.444813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:22:23.523035 systemd[1]: Reloading finished in 399 ms. 
Jul 14 22:22:23.574640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:22:23.574761 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:22:23.575050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:22:23.576670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:22:23.746920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:22:23.752675 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:22:24.063158 kubelet[1762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:22:24.063158 kubelet[1762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:22:24.063158 kubelet[1762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:22:24.063722 kubelet[1762]: I0714 22:22:24.063194 1762 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:22:24.568625 kubelet[1762]: I0714 22:22:24.568552 1762 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 22:22:24.568625 kubelet[1762]: I0714 22:22:24.568590 1762 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:22:24.568906 kubelet[1762]: I0714 22:22:24.568879 1762 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 22:22:24.592126 kubelet[1762]: I0714 22:22:24.592082 1762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:22:24.599692 kubelet[1762]: E0714 22:22:24.599647 1762 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:22:24.599692 kubelet[1762]: I0714 22:22:24.599691 1762 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:22:24.604701 kubelet[1762]: I0714 22:22:24.604646 1762 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:22:24.606023 kubelet[1762]: I0714 22:22:24.605968 1762 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:22:24.606223 kubelet[1762]: I0714 22:22:24.606011 1762 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.145","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:22:24.606223 kubelet[1762]: I0714 22:22:24.606216 1762 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:22:24.606355 kubelet[1762]: I0714 22:22:24.606228 1762 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 22:22:24.606421 kubelet[1762]: I0714 22:22:24.606399 1762 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:22:24.609700 kubelet[1762]: I0714 22:22:24.609652 1762 kubelet.go:446] "Attempting to sync node with API server" Jul 14 22:22:24.609700 kubelet[1762]: I0714 22:22:24.609697 1762 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:22:24.609765 kubelet[1762]: I0714 22:22:24.609718 1762 kubelet.go:352] "Adding apiserver pod source" Jul 14 22:22:24.609765 kubelet[1762]: I0714 22:22:24.609737 1762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:22:24.609914 kubelet[1762]: E0714 22:22:24.609877 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:24.609951 kubelet[1762]: E0714 22:22:24.609920 1762 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:24.612550 kubelet[1762]: I0714 22:22:24.612491 1762 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:22:24.612894 kubelet[1762]: I0714 22:22:24.612873 1762 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:22:24.613507 kubelet[1762]: W0714 22:22:24.613486 1762 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:22:24.616130 kubelet[1762]: I0714 22:22:24.615878 1762 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:22:24.616130 kubelet[1762]: I0714 22:22:24.615922 1762 server.go:1287] "Started kubelet" Jul 14 22:22:24.616698 kubelet[1762]: I0714 22:22:24.616650 1762 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:22:24.618410 kubelet[1762]: I0714 22:22:24.618388 1762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:22:24.618759 kubelet[1762]: I0714 22:22:24.618734 1762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:22:24.618874 kubelet[1762]: I0714 22:22:24.618856 1762 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:22:24.618997 kubelet[1762]: E0714 22:22:24.618969 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:24.619236 kubelet[1762]: I0714 22:22:24.619211 1762 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:22:24.619304 kubelet[1762]: I0714 22:22:24.619270 1762 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:22:24.620965 kubelet[1762]: I0714 22:22:24.620920 1762 server.go:479] "Adding debug handlers to kubelet server" Jul 14 22:22:24.624423 kubelet[1762]: I0714 22:22:24.622885 1762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:22:24.624423 kubelet[1762]: I0714 22:22:24.623061 1762 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:22:24.624423 kubelet[1762]: I0714 22:22:24.623172 1762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:22:24.624423 kubelet[1762]: I0714 22:22:24.623232 1762 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:22:24.625638 kubelet[1762]: E0714 22:22:24.625588 1762 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:22:24.625770 kubelet[1762]: I0714 22:22:24.625748 1762 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:22:24.640641 kubelet[1762]: E0714 22:22:24.639648 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e561f6d1b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.615897964 +0000 UTC m=+0.858993958,LastTimestamp:2025-07-14 22:22:24.615897964 +0000 UTC m=+0.858993958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:24.640889 kubelet[1762]: I0714 22:22:24.640767 1762 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:22:24.640889 kubelet[1762]: I0714 22:22:24.640789 1762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:22:24.640889 kubelet[1762]: I0714 22:22:24.640821 1762 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:22:24.645571 kubelet[1762]: W0714 22:22:24.645545 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 14 22:22:24.645685 kubelet[1762]: E0714 22:22:24.645595 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:24.645685 kubelet[1762]: W0714 22:22:24.645547 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.145" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 14 22:22:24.645685 kubelet[1762]: E0714 22:22:24.645641 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:24.646243 kubelet[1762]: W0714 22:22:24.645938 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 14 22:22:24.646243 kubelet[1762]: E0714 22:22:24.645955 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:24.646243 kubelet[1762]: E0714 22:22:24.645999 1762 controller.go:145] "Failed to ensure lease exists, will 
retry" err="leases.coordination.k8s.io \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 14 22:22:24.646725 kubelet[1762]: E0714 22:22:24.646548 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e562000cd68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.62557732 +0000 UTC m=+0.868673314,LastTimestamp:2025-07-14 22:22:24.62557732 +0000 UTC m=+0.868673314,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:24.650510 kubelet[1762]: E0714 22:22:24.650408 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cda985 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.145 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639003013 +0000 UTC m=+0.882099007,LastTimestamp:2025-07-14 22:22:24.639003013 +0000 UTC m=+0.882099007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:24.654397 kubelet[1762]: E0714 22:22:24.654293 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdbdb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639008182 +0000 UTC m=+0.882104176,LastTimestamp:2025-07-14 22:22:24.639008182 +0000 UTC m=+0.882104176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:24.658114 kubelet[1762]: E0714 22:22:24.657988 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdc684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.145 status is 
now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639010436 +0000 UTC m=+0.882106431,LastTimestamp:2025-07-14 22:22:24.639010436 +0000 UTC m=+0.882106431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:24.719096 kubelet[1762]: E0714 22:22:24.719046 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:24.819283 kubelet[1762]: E0714 22:22:24.819121 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:24.850913 kubelet[1762]: E0714 22:22:24.850867 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 14 22:22:24.919438 kubelet[1762]: E0714 22:22:24.919350 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.019992 kubelet[1762]: E0714 22:22:25.019919 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.121112 kubelet[1762]: E0714 22:22:25.120960 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.216229 kubelet[1762]: I0714 22:22:25.216187 1762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:22:25.217442 kubelet[1762]: I0714 22:22:25.217408 1762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:22:25.217535 kubelet[1762]: I0714 22:22:25.217451 1762 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 22:22:25.217535 kubelet[1762]: I0714 22:22:25.217475 1762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 22:22:25.217535 kubelet[1762]: I0714 22:22:25.217484 1762 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 22:22:25.217634 kubelet[1762]: E0714 22:22:25.217543 1762 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:22:25.221122 kubelet[1762]: E0714 22:22:25.221095 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.223430 kubelet[1762]: W0714 22:22:25.223399 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 14 22:22:25.223525 kubelet[1762]: E0714 22:22:25.223434 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:25.255834 kubelet[1762]: E0714 22:22:25.255784 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 14 22:22:25.293978 kubelet[1762]: I0714 22:22:25.293922 1762 policy_none.go:49] "None policy: Start" Jul 14 22:22:25.293978 kubelet[1762]: I0714 22:22:25.293977 1762 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:22:25.294048 kubelet[1762]: I0714 22:22:25.293998 1762 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:22:25.317956 kubelet[1762]: E0714 22:22:25.317890 1762 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:22:25.321234 kubelet[1762]: E0714 22:22:25.321178 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.412741 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 22:22:25.421537 kubelet[1762]: E0714 22:22:25.421499 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:25.423871 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 22:22:25.426910 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 14 22:22:25.445669 kubelet[1762]: I0714 22:22:25.445603 1762 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:22:25.446043 kubelet[1762]: I0714 22:22:25.445842 1762 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:22:25.446043 kubelet[1762]: I0714 22:22:25.445857 1762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:22:25.446043 kubelet[1762]: I0714 22:22:25.446032 1762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:22:25.446952 kubelet[1762]: E0714 22:22:25.446922 1762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 22:22:25.447002 kubelet[1762]: E0714 22:22:25.446953 1762 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.145\" not found" Jul 14 22:22:25.451401 kubelet[1762]: E0714 22:22:25.451302 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5650fdc90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:25.447463183 +0000 UTC m=+1.690559177,LastTimestamp:2025-07-14 22:22:25.447463183 +0000 UTC m=+1.690559177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.548026 kubelet[1762]: I0714 22:22:25.547990 1762 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.145" Jul 14 22:22:25.552544 kubelet[1762]: E0714 22:22:25.552467 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cda985\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cda985 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.145 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639003013 +0000 UTC m=+0.882099007,LastTimestamp:2025-07-14 22:22:25.547950198 +0000 UTC m=+1.791046192,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.557127 kubelet[1762]: E0714 22:22:25.557002 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cdbdb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdbdb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639008182 +0000 UTC m=+0.882104176,LastTimestamp:2025-07-14 22:22:25.547957271 +0000 UTC m=+1.791053265,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.557312 kubelet[1762]: E0714 22:22:25.557166 1762 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.145" Jul 14 22:22:25.560808 kubelet[1762]: E0714 22:22:25.560715 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cdc684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdc684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.145 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639010436 +0000 UTC m=+0.882106431,LastTimestamp:2025-07-14 22:22:25.547959786 +0000 UTC m=+1.791055780,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.610425 kubelet[1762]: E0714 22:22:25.610386 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:25.616457 kubelet[1762]: W0714 22:22:25.616415 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 14 22:22:25.616507 kubelet[1762]: E0714 22:22:25.616453 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:25.757999 kubelet[1762]: I0714 22:22:25.757958 1762 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.145" Jul 14 22:22:25.826061 kubelet[1762]: E0714 22:22:25.825959 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cda985\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cda985 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.145 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639003013 +0000 UTC 
m=+0.882099007,LastTimestamp:2025-07-14 22:22:25.757871931 +0000 UTC m=+2.000967925,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.826964 kubelet[1762]: W0714 22:22:25.826945 1762 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.145" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 14 22:22:25.826999 kubelet[1762]: E0714 22:22:25.826976 1762 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 14 22:22:25.828553 kubelet[1762]: E0714 22:22:25.828451 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cdbdb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdbdb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639008182 +0000 UTC m=+0.882104176,LastTimestamp:2025-07-14 22:22:25.757885647 +0000 UTC m=+2.000981631,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:25.829986 kubelet[1762]: E0714 22:22:25.829957 1762 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.145" Jul 14 22:22:25.830124 kubelet[1762]: E0714 22:22:25.830008 1762 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.145.18523e5620cdc684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.18523e5620cdc684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.145 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2025-07-14 22:22:24.639010436 +0000 UTC m=+0.882106431,LastTimestamp:2025-07-14 22:22:25.757888682 +0000 UTC m=+2.000984676,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Jul 14 22:22:26.057550 kubelet[1762]: E0714 22:22:26.057444 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Jul 14 22:22:26.231303 kubelet[1762]: I0714 22:22:26.231276 1762 kubelet_node_status.go:75] "Attempting to 
register node" node="10.0.0.145" Jul 14 22:22:26.385147 kubelet[1762]: I0714 22:22:26.385029 1762 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.145" Jul 14 22:22:26.385147 kubelet[1762]: E0714 22:22:26.385063 1762 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.145\": node \"10.0.0.145\" not found" Jul 14 22:22:26.537966 kubelet[1762]: E0714 22:22:26.537923 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:26.573127 kubelet[1762]: I0714 22:22:26.573070 1762 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 14 22:22:26.573262 kubelet[1762]: W0714 22:22:26.573242 1762 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 14 22:22:26.573286 kubelet[1762]: W0714 22:22:26.573242 1762 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 14 22:22:26.611473 kubelet[1762]: E0714 22:22:26.611442 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:26.638981 kubelet[1762]: E0714 22:22:26.638873 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:26.739396 kubelet[1762]: E0714 22:22:26.739302 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:26.840180 kubelet[1762]: E0714 22:22:26.840117 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:26.941010 kubelet[1762]: E0714 22:22:26.940848 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.041797 kubelet[1762]: E0714 22:22:27.041723 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.141924 kubelet[1762]: E0714 22:22:27.141861 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.159461 sudo[1634]: pam_unix(sudo:session): session closed for user root Jul 14 22:22:27.161113 sshd[1631]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:27.164660 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:41788.service: Deactivated successfully. Jul 14 22:22:27.166598 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:22:27.167298 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:22:27.168286 systemd-logind[1442]: Removed session 7. 
Jul 14 22:22:27.242083 kubelet[1762]: E0714 22:22:27.242017 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.342817 kubelet[1762]: E0714 22:22:27.342687 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.443456 kubelet[1762]: E0714 22:22:27.443365 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.544316 kubelet[1762]: E0714 22:22:27.544186 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.611956 kubelet[1762]: E0714 22:22:27.611878 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:27.644867 kubelet[1762]: E0714 22:22:27.644799 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.745811 kubelet[1762]: E0714 22:22:27.745736 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.846637 kubelet[1762]: E0714 22:22:27.846460 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:27.947289 kubelet[1762]: E0714 22:22:27.947217 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.048032 kubelet[1762]: E0714 22:22:28.047955 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.148571 kubelet[1762]: E0714 22:22:28.148359 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.249396 kubelet[1762]: E0714 22:22:28.249345 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.349583 kubelet[1762]: E0714 22:22:28.349492 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.450346 kubelet[1762]: E0714 22:22:28.450292 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.551192 kubelet[1762]: E0714 22:22:28.551114 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.612860 kubelet[1762]: E0714 22:22:28.612792 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:28.651751 kubelet[1762]: E0714 22:22:28.651683 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.752660 kubelet[1762]: E0714 22:22:28.752441 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.853358 kubelet[1762]: E0714 22:22:28.853282 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:28.953971 kubelet[1762]: E0714 22:22:28.953899 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Jul 14 22:22:29.055305 kubelet[1762]: I0714 22:22:29.055174 
1762 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 14 22:22:29.055649 containerd[1458]: time="2025-07-14T22:22:29.055576332Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 22:22:29.056073 kubelet[1762]: I0714 22:22:29.055784 1762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 14 22:22:29.613537 kubelet[1762]: I0714 22:22:29.613475 1762 apiserver.go:52] "Watching apiserver" Jul 14 22:22:29.613991 kubelet[1762]: E0714 22:22:29.613545 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:29.669070 kubelet[1762]: E0714 22:22:29.668915 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:29.675387 systemd[1]: Created slice kubepods-besteffort-pod54846615_8b27_496a_92c5_7b5122f732d0.slice - libcontainer container kubepods-besteffort-pod54846615_8b27_496a_92c5_7b5122f732d0.slice. Jul 14 22:22:29.688377 systemd[1]: Created slice kubepods-besteffort-pod25c545cf_e23a_4a99_9fbb_46a713c4e937.slice - libcontainer container kubepods-besteffort-pod25c545cf_e23a_4a99_9fbb_46a713c4e937.slice. Jul 14 22:22:29.720383 kubelet[1762]: I0714 22:22:29.720307 1762 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:22:29.748472 kubelet[1762]: I0714 22:22:29.748397 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-xtables-lock\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748472 kubelet[1762]: I0714 22:22:29.748446 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54846615-8b27-496a-92c5-7b5122f732d0-kube-proxy\") pod \"kube-proxy-n8979\" (UID: \"54846615-8b27-496a-92c5-7b5122f732d0\") " pod="kube-system/kube-proxy-n8979" Jul 14 22:22:29.748472 kubelet[1762]: I0714 22:22:29.748466 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54846615-8b27-496a-92c5-7b5122f732d0-lib-modules\") pod \"kube-proxy-n8979\" (UID: \"54846615-8b27-496a-92c5-7b5122f732d0\") " pod="kube-system/kube-proxy-n8979" Jul 14 22:22:29.748472 kubelet[1762]: I0714 22:22:29.748483 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-cni-log-dir\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748748 kubelet[1762]: I0714 22:22:29.748501 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-flexvol-driver-host\") pod \"calico-node-pst5s\" (UID: 
\"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748748 kubelet[1762]: I0714 22:22:29.748530 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25c545cf-e23a-4a99-9fbb-46a713c4e937-node-certs\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748748 kubelet[1762]: I0714 22:22:29.748548 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c545cf-e23a-4a99-9fbb-46a713c4e937-tigera-ca-bundle\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748748 kubelet[1762]: I0714 22:22:29.748566 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzg9s\" (UniqueName: \"kubernetes.io/projected/54846615-8b27-496a-92c5-7b5122f732d0-kube-api-access-fzg9s\") pod \"kube-proxy-n8979\" (UID: \"54846615-8b27-496a-92c5-7b5122f732d0\") " pod="kube-system/kube-proxy-n8979" Jul 14 22:22:29.748748 kubelet[1762]: I0714 22:22:29.748654 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12c1a6b9-dbe4-46bb-b922-bd804d0944b3-socket-dir\") pod \"csi-node-driver-t7zr7\" (UID: \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\") " pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:29.748875 kubelet[1762]: I0714 22:22:29.748714 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-cni-bin-dir\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748875 kubelet[1762]: I0714 22:22:29.748764 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-lib-modules\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748875 kubelet[1762]: I0714 22:22:29.748806 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-var-run-calico\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748875 kubelet[1762]: I0714 22:22:29.748830 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gpb\" (UniqueName: \"kubernetes.io/projected/25c545cf-e23a-4a99-9fbb-46a713c4e937-kube-api-access-j4gpb\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.748875 kubelet[1762]: I0714 22:22:29.748854 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12c1a6b9-dbe4-46bb-b922-bd804d0944b3-registration-dir\") pod \"csi-node-driver-t7zr7\" (UID: \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\") " 
pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:29.749024 kubelet[1762]: I0714 22:22:29.748878 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12c1a6b9-dbe4-46bb-b922-bd804d0944b3-varrun\") pod \"csi-node-driver-t7zr7\" (UID: \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\") " pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:29.749024 kubelet[1762]: I0714 22:22:29.748897 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7hm\" (UniqueName: \"kubernetes.io/projected/12c1a6b9-dbe4-46bb-b922-bd804d0944b3-kube-api-access-qc7hm\") pod \"csi-node-driver-t7zr7\" (UID: \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\") " pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:29.749024 kubelet[1762]: I0714 22:22:29.748917 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54846615-8b27-496a-92c5-7b5122f732d0-xtables-lock\") pod \"kube-proxy-n8979\" (UID: \"54846615-8b27-496a-92c5-7b5122f732d0\") " pod="kube-system/kube-proxy-n8979" Jul 14 22:22:29.749024 kubelet[1762]: I0714 22:22:29.748937 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-cni-net-dir\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.749024 kubelet[1762]: I0714 22:22:29.748958 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-policysync\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.749159 kubelet[1762]: I0714 22:22:29.748979 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25c545cf-e23a-4a99-9fbb-46a713c4e937-var-lib-calico\") pod \"calico-node-pst5s\" (UID: \"25c545cf-e23a-4a99-9fbb-46a713c4e937\") " pod="calico-system/calico-node-pst5s" Jul 14 22:22:29.749159 kubelet[1762]: I0714 22:22:29.748999 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c1a6b9-dbe4-46bb-b922-bd804d0944b3-kubelet-dir\") pod \"csi-node-driver-t7zr7\" (UID: \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\") " pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:29.850844 kubelet[1762]: E0714 22:22:29.850809 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:29.850844 kubelet[1762]: W0714 22:22:29.850835 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:29.850998 kubelet[1762]: E0714 22:22:29.850869 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:29.854098 kubelet[1762]: E0714 22:22:29.854071 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:29.854098 kubelet[1762]: W0714 22:22:29.854091 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:29.854210 kubelet[1762]: E0714 22:22:29.854107 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:29.896206 kubelet[1762]: E0714 22:22:29.896085 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:29.896206 kubelet[1762]: W0714 22:22:29.896114 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:29.896206 kubelet[1762]: E0714 22:22:29.896138 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:29.897448 kubelet[1762]: E0714 22:22:29.897076 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:29.897448 kubelet[1762]: W0714 22:22:29.897092 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:29.897448 kubelet[1762]: E0714 22:22:29.897119 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:29.897448 kubelet[1762]: E0714 22:22:29.897366 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:29.897448 kubelet[1762]: W0714 22:22:29.897377 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:29.897448 kubelet[1762]: E0714 22:22:29.897388 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:29.986285 kubelet[1762]: E0714 22:22:29.986247 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:29.986861 containerd[1458]: time="2025-07-14T22:22:29.986815761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8979,Uid:54846615-8b27-496a-92c5-7b5122f732d0,Namespace:kube-system,Attempt:0,}" Jul 14 22:22:29.991309 containerd[1458]: time="2025-07-14T22:22:29.991284172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pst5s,Uid:25c545cf-e23a-4a99-9fbb-46a713c4e937,Namespace:calico-system,Attempt:0,}" Jul 14 22:22:30.614306 kubelet[1762]: E0714 22:22:30.614262 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:31.218503 kubelet[1762]: E0714 22:22:31.218453 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:31.615460 kubelet[1762]: E0714 22:22:31.615316 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:32.616281 kubelet[1762]: E0714 22:22:32.616210 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:33.218532 kubelet[1762]: E0714 22:22:33.218455 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:33.616675 kubelet[1762]: E0714 22:22:33.616518 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:34.617152 kubelet[1762]: E0714 22:22:34.617098 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:35.217997 kubelet[1762]: E0714 22:22:35.217961 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:35.618022 kubelet[1762]: E0714 22:22:35.617891 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:36.618702 kubelet[1762]: E0714 22:22:36.618664 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:37.218331 kubelet[1762]: E0714 22:22:37.218146 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 
22:22:37.619244 kubelet[1762]: E0714 22:22:37.619124 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:38.619855 kubelet[1762]: E0714 22:22:38.619781 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:39.218261 kubelet[1762]: E0714 22:22:39.218176 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:39.620338 kubelet[1762]: E0714 22:22:39.620176 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:40.620979 kubelet[1762]: E0714 22:22:40.620899 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:40.972659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363042365.mount: Deactivated successfully. Jul 14 22:22:40.981409 containerd[1458]: time="2025-07-14T22:22:40.981346688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:22:40.982502 containerd[1458]: time="2025-07-14T22:22:40.982456650Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:22:40.983063 containerd[1458]: time="2025-07-14T22:22:40.983018614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:22:40.985007 containerd[1458]: time="2025-07-14T22:22:40.984951660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:22:40.985346 containerd[1458]: time="2025-07-14T22:22:40.985280797Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:22:40.988530 containerd[1458]: time="2025-07-14T22:22:40.988489987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:22:40.989343 containerd[1458]: time="2025-07-14T22:22:40.989304194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 11.002407852s" Jul 14 22:22:40.991509 containerd[1458]: time="2025-07-14T22:22:40.991476689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 11.000120091s" Jul 14 22:22:41.096541 containerd[1458]: time="2025-07-14T22:22:41.096459597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:41.096541 containerd[1458]: time="2025-07-14T22:22:41.096497378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:41.096541 containerd[1458]: time="2025-07-14T22:22:41.096507437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:41.096795 containerd[1458]: time="2025-07-14T22:22:41.096561999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:41.096795 containerd[1458]: time="2025-07-14T22:22:41.096284388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:41.096795 containerd[1458]: time="2025-07-14T22:22:41.096344852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:41.096795 containerd[1458]: time="2025-07-14T22:22:41.096357606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:41.096795 containerd[1458]: time="2025-07-14T22:22:41.096462412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:41.164762 systemd[1]: Started cri-containerd-95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d.scope - libcontainer container 95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d. Jul 14 22:22:41.166513 systemd[1]: Started cri-containerd-a9e63adc239ef940b24e2bdf4df369ace1d1e7ba9e59c9585bc20e8e69113270.scope - libcontainer container a9e63adc239ef940b24e2bdf4df369ace1d1e7ba9e59c9585bc20e8e69113270. 
Jul 14 22:22:41.189081 containerd[1458]: time="2025-07-14T22:22:41.189042696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8979,Uid:54846615-8b27-496a-92c5-7b5122f732d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e63adc239ef940b24e2bdf4df369ace1d1e7ba9e59c9585bc20e8e69113270\"" Jul 14 22:22:41.191022 kubelet[1762]: E0714 22:22:41.190864 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:41.191780 containerd[1458]: time="2025-07-14T22:22:41.191751698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Jul 14 22:22:41.192150 containerd[1458]: time="2025-07-14T22:22:41.192128074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pst5s,Uid:25c545cf-e23a-4a99-9fbb-46a713c4e937,Namespace:calico-system,Attempt:0,} returns sandbox id \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\"" Jul 14 22:22:41.218665 kubelet[1762]: E0714 22:22:41.218623 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:41.621462 kubelet[1762]: E0714 22:22:41.621390 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:42.254123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963763893.mount: Deactivated successfully. Jul 14 22:22:42.561408 containerd[1458]: time="2025-07-14T22:22:42.561261236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:42.562199 containerd[1458]: time="2025-07-14T22:22:42.562154181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Jul 14 22:22:42.563428 containerd[1458]: time="2025-07-14T22:22:42.563351848Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:42.565415 containerd[1458]: time="2025-07-14T22:22:42.565369813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:42.565948 containerd[1458]: time="2025-07-14T22:22:42.565894186Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.374115618s" Jul 14 22:22:42.565948 containerd[1458]: time="2025-07-14T22:22:42.565939451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Jul 14 22:22:42.567056 containerd[1458]: time="2025-07-14T22:22:42.567032021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:22:42.568310 containerd[1458]: 
time="2025-07-14T22:22:42.568276435Z" level=info msg="CreateContainer within sandbox \"a9e63adc239ef940b24e2bdf4df369ace1d1e7ba9e59c9585bc20e8e69113270\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:22:42.585146 containerd[1458]: time="2025-07-14T22:22:42.585083149Z" level=info msg="CreateContainer within sandbox \"a9e63adc239ef940b24e2bdf4df369ace1d1e7ba9e59c9585bc20e8e69113270\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96680769d5cbb45d07cbaf1f81b439d6576e0f7f4be3f6586fd9d99a917750ac\"" Jul 14 22:22:42.585732 containerd[1458]: time="2025-07-14T22:22:42.585709384Z" level=info msg="StartContainer for \"96680769d5cbb45d07cbaf1f81b439d6576e0f7f4be3f6586fd9d99a917750ac\"" Jul 14 22:22:42.616752 systemd[1]: Started cri-containerd-96680769d5cbb45d07cbaf1f81b439d6576e0f7f4be3f6586fd9d99a917750ac.scope - libcontainer container 96680769d5cbb45d07cbaf1f81b439d6576e0f7f4be3f6586fd9d99a917750ac. Jul 14 22:22:42.621951 kubelet[1762]: E0714 22:22:42.621906 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:42.644226 containerd[1458]: time="2025-07-14T22:22:42.644188744Z" level=info msg="StartContainer for \"96680769d5cbb45d07cbaf1f81b439d6576e0f7f4be3f6586fd9d99a917750ac\" returns successfully" Jul 14 22:22:43.218332 kubelet[1762]: E0714 22:22:43.218289 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:43.244626 kubelet[1762]: E0714 22:22:43.244577 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:43.254544 kubelet[1762]: I0714 22:22:43.254466 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8979" podStartSLOduration=15.878993287 podStartE2EDuration="17.25444971s" podCreationTimestamp="2025-07-14 22:22:26 +0000 UTC" firstStartedPulling="2025-07-14 22:22:41.191362568 +0000 UTC m=+17.434458562" lastFinishedPulling="2025-07-14 22:22:42.566818991 +0000 UTC m=+18.809914985" observedRunningTime="2025-07-14 22:22:43.254383896 +0000 UTC m=+19.497479890" watchObservedRunningTime="2025-07-14 22:22:43.25444971 +0000 UTC m=+19.497545704" Jul 14 22:22:43.310327 kubelet[1762]: E0714 22:22:43.310285 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.310327 kubelet[1762]: W0714 22:22:43.310314 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.310327 kubelet[1762]: E0714 22:22:43.310336 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.310659 kubelet[1762]: E0714 22:22:43.310604 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.310659 kubelet[1762]: W0714 22:22:43.310654 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.310732 kubelet[1762]: E0714 22:22:43.310666 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.310944 kubelet[1762]: E0714 22:22:43.310915 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.310944 kubelet[1762]: W0714 22:22:43.310928 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.310944 kubelet[1762]: E0714 22:22:43.310937 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.311227 kubelet[1762]: E0714 22:22:43.311209 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.311227 kubelet[1762]: W0714 22:22:43.311222 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.311295 kubelet[1762]: E0714 22:22:43.311233 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.311594 kubelet[1762]: E0714 22:22:43.311564 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.311594 kubelet[1762]: W0714 22:22:43.311580 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.311594 kubelet[1762]: E0714 22:22:43.311590 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.311866 kubelet[1762]: E0714 22:22:43.311837 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.311866 kubelet[1762]: W0714 22:22:43.311849 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.311866 kubelet[1762]: E0714 22:22:43.311859 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.312089 kubelet[1762]: E0714 22:22:43.312070 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.312089 kubelet[1762]: W0714 22:22:43.312084 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.312182 kubelet[1762]: E0714 22:22:43.312095 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.312346 kubelet[1762]: E0714 22:22:43.312328 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.312346 kubelet[1762]: W0714 22:22:43.312341 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.312412 kubelet[1762]: E0714 22:22:43.312354 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.312596 kubelet[1762]: E0714 22:22:43.312577 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.312596 kubelet[1762]: W0714 22:22:43.312591 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.312676 kubelet[1762]: E0714 22:22:43.312604 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.312864 kubelet[1762]: E0714 22:22:43.312845 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.312864 kubelet[1762]: W0714 22:22:43.312860 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.312927 kubelet[1762]: E0714 22:22:43.312871 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.313167 kubelet[1762]: E0714 22:22:43.313137 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.313167 kubelet[1762]: W0714 22:22:43.313153 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.313167 kubelet[1762]: E0714 22:22:43.313165 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.313417 kubelet[1762]: E0714 22:22:43.313397 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.313417 kubelet[1762]: W0714 22:22:43.313412 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.313480 kubelet[1762]: E0714 22:22:43.313423 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.313726 kubelet[1762]: E0714 22:22:43.313707 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.313726 kubelet[1762]: W0714 22:22:43.313721 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.313841 kubelet[1762]: E0714 22:22:43.313733 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.313976 kubelet[1762]: E0714 22:22:43.313958 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.313976 kubelet[1762]: W0714 22:22:43.313969 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.313976 kubelet[1762]: E0714 22:22:43.313977 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.314298 kubelet[1762]: E0714 22:22:43.314265 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.314351 kubelet[1762]: W0714 22:22:43.314295 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.314351 kubelet[1762]: E0714 22:22:43.314321 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.314823 kubelet[1762]: E0714 22:22:43.314623 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.314823 kubelet[1762]: W0714 22:22:43.314648 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.314823 kubelet[1762]: E0714 22:22:43.314673 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.315011 kubelet[1762]: E0714 22:22:43.314985 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.315011 kubelet[1762]: W0714 22:22:43.314999 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.315011 kubelet[1762]: E0714 22:22:43.315012 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.315269 kubelet[1762]: E0714 22:22:43.315245 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.315269 kubelet[1762]: W0714 22:22:43.315259 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.315269 kubelet[1762]: E0714 22:22:43.315270 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.315493 kubelet[1762]: E0714 22:22:43.315474 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.315493 kubelet[1762]: W0714 22:22:43.315488 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.315556 kubelet[1762]: E0714 22:22:43.315498 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.315750 kubelet[1762]: E0714 22:22:43.315730 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.315750 kubelet[1762]: W0714 22:22:43.315744 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.315817 kubelet[1762]: E0714 22:22:43.315757 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.342358 kubelet[1762]: E0714 22:22:43.342267 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.342358 kubelet[1762]: W0714 22:22:43.342295 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.342358 kubelet[1762]: E0714 22:22:43.342312 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.342651 kubelet[1762]: E0714 22:22:43.342554 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.342651 kubelet[1762]: W0714 22:22:43.342563 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.342651 kubelet[1762]: E0714 22:22:43.342576 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.343003 kubelet[1762]: E0714 22:22:43.342947 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.343003 kubelet[1762]: W0714 22:22:43.342978 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.343094 kubelet[1762]: E0714 22:22:43.343007 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.343259 kubelet[1762]: E0714 22:22:43.343229 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.343259 kubelet[1762]: W0714 22:22:43.343242 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.343259 kubelet[1762]: E0714 22:22:43.343254 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.343491 kubelet[1762]: E0714 22:22:43.343457 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.343491 kubelet[1762]: W0714 22:22:43.343469 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.343491 kubelet[1762]: E0714 22:22:43.343480 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.343761 kubelet[1762]: E0714 22:22:43.343725 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.343761 kubelet[1762]: W0714 22:22:43.343738 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.343761 kubelet[1762]: E0714 22:22:43.343750 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.344071 kubelet[1762]: E0714 22:22:43.344030 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.344071 kubelet[1762]: W0714 22:22:43.344048 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.344071 kubelet[1762]: E0714 22:22:43.344065 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.344301 kubelet[1762]: E0714 22:22:43.344274 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.344301 kubelet[1762]: W0714 22:22:43.344285 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.344301 kubelet[1762]: E0714 22:22:43.344299 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.344499 kubelet[1762]: E0714 22:22:43.344475 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.344499 kubelet[1762]: W0714 22:22:43.344485 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.344499 kubelet[1762]: E0714 22:22:43.344497 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.344725 kubelet[1762]: E0714 22:22:43.344701 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.344725 kubelet[1762]: W0714 22:22:43.344712 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.344725 kubelet[1762]: E0714 22:22:43.344724 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.345004 kubelet[1762]: E0714 22:22:43.344963 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.345004 kubelet[1762]: W0714 22:22:43.344978 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.345004 kubelet[1762]: E0714 22:22:43.344992 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:43.345224 kubelet[1762]: E0714 22:22:43.345199 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:43.345224 kubelet[1762]: W0714 22:22:43.345211 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:43.345224 kubelet[1762]: E0714 22:22:43.345219 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:43.622891 kubelet[1762]: E0714 22:22:43.622768 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:44.246147 kubelet[1762]: E0714 22:22:44.246099 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:44.322704 kubelet[1762]: E0714 22:22:44.322656 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.322704 kubelet[1762]: W0714 22:22:44.322682 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.322704 kubelet[1762]: E0714 22:22:44.322701 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.322975 kubelet[1762]: E0714 22:22:44.322954 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.322975 kubelet[1762]: W0714 22:22:44.322963 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.322975 kubelet[1762]: E0714 22:22:44.322970 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.323209 kubelet[1762]: E0714 22:22:44.323198 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.323209 kubelet[1762]: W0714 22:22:44.323206 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.323278 kubelet[1762]: E0714 22:22:44.323214 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.323392 kubelet[1762]: E0714 22:22:44.323380 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.323392 kubelet[1762]: W0714 22:22:44.323389 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.323452 kubelet[1762]: E0714 22:22:44.323396 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.323573 kubelet[1762]: E0714 22:22:44.323562 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.323573 kubelet[1762]: W0714 22:22:44.323570 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.323638 kubelet[1762]: E0714 22:22:44.323577 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.323773 kubelet[1762]: E0714 22:22:44.323751 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.323773 kubelet[1762]: W0714 22:22:44.323759 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.323773 kubelet[1762]: E0714 22:22:44.323766 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.323942 kubelet[1762]: E0714 22:22:44.323928 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.323942 kubelet[1762]: W0714 22:22:44.323937 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.323984 kubelet[1762]: E0714 22:22:44.323944 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.324154 kubelet[1762]: E0714 22:22:44.324131 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.324154 kubelet[1762]: W0714 22:22:44.324141 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.324154 kubelet[1762]: E0714 22:22:44.324148 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.324328 kubelet[1762]: E0714 22:22:44.324313 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.324328 kubelet[1762]: W0714 22:22:44.324322 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.324373 kubelet[1762]: E0714 22:22:44.324329 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.324501 kubelet[1762]: E0714 22:22:44.324487 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.324501 kubelet[1762]: W0714 22:22:44.324495 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.324547 kubelet[1762]: E0714 22:22:44.324502 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.324678 kubelet[1762]: E0714 22:22:44.324666 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.324678 kubelet[1762]: W0714 22:22:44.324674 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.324723 kubelet[1762]: E0714 22:22:44.324683 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.324844 kubelet[1762]: E0714 22:22:44.324831 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.324844 kubelet[1762]: W0714 22:22:44.324839 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.324890 kubelet[1762]: E0714 22:22:44.324846 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.325041 kubelet[1762]: E0714 22:22:44.325028 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325041 kubelet[1762]: W0714 22:22:44.325036 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.325094 kubelet[1762]: E0714 22:22:44.325043 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.325219 kubelet[1762]: E0714 22:22:44.325205 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325219 kubelet[1762]: W0714 22:22:44.325213 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.325268 kubelet[1762]: E0714 22:22:44.325220 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.325404 kubelet[1762]: E0714 22:22:44.325392 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325404 kubelet[1762]: W0714 22:22:44.325400 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.325449 kubelet[1762]: E0714 22:22:44.325406 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.325585 kubelet[1762]: E0714 22:22:44.325572 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325585 kubelet[1762]: W0714 22:22:44.325580 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.325643 kubelet[1762]: E0714 22:22:44.325587 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.325795 kubelet[1762]: E0714 22:22:44.325782 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325795 kubelet[1762]: W0714 22:22:44.325790 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.325848 kubelet[1762]: E0714 22:22:44.325797 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.325966 kubelet[1762]: E0714 22:22:44.325954 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.325966 kubelet[1762]: W0714 22:22:44.325962 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.326032 kubelet[1762]: E0714 22:22:44.325968 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.326165 kubelet[1762]: E0714 22:22:44.326151 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.326165 kubelet[1762]: W0714 22:22:44.326160 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.326209 kubelet[1762]: E0714 22:22:44.326167 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.326346 kubelet[1762]: E0714 22:22:44.326333 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.326346 kubelet[1762]: W0714 22:22:44.326341 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.326385 kubelet[1762]: E0714 22:22:44.326348 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.350000 kubelet[1762]: E0714 22:22:44.349956 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.350000 kubelet[1762]: W0714 22:22:44.349981 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.350000 kubelet[1762]: E0714 22:22:44.350001 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.350279 kubelet[1762]: E0714 22:22:44.350250 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.350279 kubelet[1762]: W0714 22:22:44.350266 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.350279 kubelet[1762]: E0714 22:22:44.350281 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.350556 kubelet[1762]: E0714 22:22:44.350540 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.350556 kubelet[1762]: W0714 22:22:44.350551 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.350665 kubelet[1762]: E0714 22:22:44.350566 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.350817 kubelet[1762]: E0714 22:22:44.350801 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.350817 kubelet[1762]: W0714 22:22:44.350812 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.350889 kubelet[1762]: E0714 22:22:44.350826 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.351098 kubelet[1762]: E0714 22:22:44.351051 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.351098 kubelet[1762]: W0714 22:22:44.351065 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.351098 kubelet[1762]: E0714 22:22:44.351087 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.351356 kubelet[1762]: E0714 22:22:44.351342 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.351356 kubelet[1762]: W0714 22:22:44.351352 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.351415 kubelet[1762]: E0714 22:22:44.351395 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.351562 kubelet[1762]: E0714 22:22:44.351547 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.351562 kubelet[1762]: W0714 22:22:44.351558 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.351622 kubelet[1762]: E0714 22:22:44.351567 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.351801 kubelet[1762]: E0714 22:22:44.351785 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.351801 kubelet[1762]: W0714 22:22:44.351796 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.351854 kubelet[1762]: E0714 22:22:44.351808 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:22:44.352024 kubelet[1762]: E0714 22:22:44.352008 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.352024 kubelet[1762]: W0714 22:22:44.352018 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.352091 kubelet[1762]: E0714 22:22:44.352030 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.352250 kubelet[1762]: E0714 22:22:44.352234 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.352275 kubelet[1762]: W0714 22:22:44.352249 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.352275 kubelet[1762]: E0714 22:22:44.352266 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.352476 kubelet[1762]: E0714 22:22:44.352465 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.352476 kubelet[1762]: W0714 22:22:44.352474 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.352523 kubelet[1762]: E0714 22:22:44.352483 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.352837 kubelet[1762]: E0714 22:22:44.352819 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:22:44.352837 kubelet[1762]: W0714 22:22:44.352829 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:22:44.352837 kubelet[1762]: E0714 22:22:44.352837 1762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:22:44.490207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611123763.mount: Deactivated successfully. 
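
The repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory nodeagent~uds and finding no executable, so every call returns empty output that cannot be unmarshalled as JSON. For context only, a driver that would satisfy the init handshake merely has to print a small JSON status document on stdout; the sketch below is illustrative and is not the real uds binary, which Calico's pod2daemon-flexvol init container installs in the steps that follow.

// flexvol_init_sketch.go - illustrative only; the real driver at
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
// is installed by the calico pod2daemon-flexvol container pulled below.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet's FlexVolume driver-call
// code expects back from every invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// An empty reply is what produces "unexpected end of JSON input";
		// kubelet needs at least a Status field it can unmarshal.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Other calls (mount, unmount, ...) would be handled here.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
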
Jul 14 22:22:44.545759 containerd[1458]: time="2025-07-14T22:22:44.545631397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:44.546899 containerd[1458]: time="2025-07-14T22:22:44.546863018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 14 22:22:44.548478 containerd[1458]: time="2025-07-14T22:22:44.548451197Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:44.550595 containerd[1458]: time="2025-07-14T22:22:44.550552378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:44.551143 containerd[1458]: time="2025-07-14T22:22:44.551113310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.984051704s" Jul 14 22:22:44.551182 containerd[1458]: time="2025-07-14T22:22:44.551144288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 22:22:44.555315 containerd[1458]: time="2025-07-14T22:22:44.555278903Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:22:44.568827 containerd[1458]: time="2025-07-14T22:22:44.568777172Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198\"" Jul 14 22:22:44.569314 containerd[1458]: time="2025-07-14T22:22:44.569284845Z" level=info msg="StartContainer for \"99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198\"" Jul 14 22:22:44.599828 systemd[1]: Started cri-containerd-99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198.scope - libcontainer container 99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198. Jul 14 22:22:44.610799 kubelet[1762]: E0714 22:22:44.610750 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:44.623652 kubelet[1762]: E0714 22:22:44.623578 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:44.638624 systemd[1]: cri-containerd-99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198.scope: Deactivated successfully. 
Jul 14 22:22:44.686169 containerd[1458]: time="2025-07-14T22:22:44.686114103Z" level=info msg="StartContainer for \"99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198\" returns successfully" Jul 14 22:22:45.162559 containerd[1458]: time="2025-07-14T22:22:45.162494146Z" level=info msg="shim disconnected" id=99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198 namespace=k8s.io Jul 14 22:22:45.162559 containerd[1458]: time="2025-07-14T22:22:45.162555070Z" level=warning msg="cleaning up after shim disconnected" id=99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198 namespace=k8s.io Jul 14 22:22:45.162559 containerd[1458]: time="2025-07-14T22:22:45.162564067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:22:45.220554 kubelet[1762]: E0714 22:22:45.220519 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:45.249439 containerd[1458]: time="2025-07-14T22:22:45.249395588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:22:45.470090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99c23c667c732e7b41b4c2a74b486f64d3fb3b2f77a2c0e0cdcbff255959a198-rootfs.mount: Deactivated successfully. Jul 14 22:22:45.624406 kubelet[1762]: E0714 22:22:45.624365 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:46.624706 kubelet[1762]: E0714 22:22:46.624634 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:47.218053 kubelet[1762]: E0714 22:22:47.218006 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:47.625125 kubelet[1762]: E0714 22:22:47.624969 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:48.597804 containerd[1458]: time="2025-07-14T22:22:48.597729816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:48.598557 containerd[1458]: time="2025-07-14T22:22:48.598501093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 22:22:48.599881 containerd[1458]: time="2025-07-14T22:22:48.599832269Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:48.602275 containerd[1458]: time="2025-07-14T22:22:48.602193318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:48.603016 containerd[1458]: time="2025-07-14T22:22:48.602972760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.353535544s" Jul 14 22:22:48.603016 containerd[1458]: time="2025-07-14T22:22:48.603007756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 22:22:48.604978 containerd[1458]: time="2025-07-14T22:22:48.604928969Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:22:48.620738 containerd[1458]: time="2025-07-14T22:22:48.620696755Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d\"" Jul 14 22:22:48.621164 containerd[1458]: time="2025-07-14T22:22:48.621135087Z" level=info msg="StartContainer for \"629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d\"" Jul 14 22:22:48.625268 kubelet[1762]: E0714 22:22:48.625234 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:48.650752 systemd[1]: Started cri-containerd-629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d.scope - libcontainer container 629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d. Jul 14 22:22:48.679143 containerd[1458]: time="2025-07-14T22:22:48.679096757Z" level=info msg="StartContainer for \"629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d\" returns successfully" Jul 14 22:22:49.218122 kubelet[1762]: E0714 22:22:49.218059 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:49.626181 kubelet[1762]: E0714 22:22:49.625994 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:50.626726 kubelet[1762]: E0714 22:22:50.626683 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:50.758951 containerd[1458]: time="2025-07-14T22:22:50.758888736Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:22:50.762421 systemd[1]: cri-containerd-629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d.scope: Deactivated successfully. Jul 14 22:22:50.783543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d-rootfs.mount: Deactivated successfully. 
Jul 14 22:22:50.852580 kubelet[1762]: I0714 22:22:50.852538 1762 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 22:22:50.990159 containerd[1458]: time="2025-07-14T22:22:50.990084053Z" level=info msg="shim disconnected" id=629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d namespace=k8s.io Jul 14 22:22:50.990159 containerd[1458]: time="2025-07-14T22:22:50.990137165Z" level=warning msg="cleaning up after shim disconnected" id=629359482bb6227e2ab63626d95ec3764aa5a18b87ca6a702aaf840e1da4a35d namespace=k8s.io Jul 14 22:22:50.990159 containerd[1458]: time="2025-07-14T22:22:50.990145992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:22:51.224520 systemd[1]: Created slice kubepods-besteffort-pod12c1a6b9_dbe4_46bb_b922_bd804d0944b3.slice - libcontainer container kubepods-besteffort-pod12c1a6b9_dbe4_46bb_b922_bd804d0944b3.slice. Jul 14 22:22:51.226407 containerd[1458]: time="2025-07-14T22:22:51.226375710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t7zr7,Uid:12c1a6b9-dbe4-46bb-b922-bd804d0944b3,Namespace:calico-system,Attempt:0,}" Jul 14 22:22:51.261757 containerd[1458]: time="2025-07-14T22:22:51.261630614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:22:51.287429 containerd[1458]: time="2025-07-14T22:22:51.287368393Z" level=error msg="Failed to destroy network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:51.287806 containerd[1458]: time="2025-07-14T22:22:51.287771971Z" level=error msg="encountered an error cleaning up failed sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:51.287842 containerd[1458]: time="2025-07-14T22:22:51.287821197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t7zr7,Uid:12c1a6b9-dbe4-46bb-b922-bd804d0944b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:51.288063 kubelet[1762]: E0714 22:22:51.288002 1762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:51.288063 kubelet[1762]: E0714 22:22:51.288061 1762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:51.288226 kubelet[1762]: E0714 22:22:51.288080 1762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t7zr7" Jul 14 22:22:51.288226 kubelet[1762]: E0714 22:22:51.288122 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t7zr7_calico-system(12c1a6b9-dbe4-46bb-b922-bd804d0944b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t7zr7_calico-system(12c1a6b9-dbe4-46bb-b922-bd804d0944b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:51.289233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7-shm.mount: Deactivated successfully. Jul 14 22:22:51.627168 kubelet[1762]: E0714 22:22:51.627062 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:52.262983 kubelet[1762]: I0714 22:22:52.262938 1762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:22:52.263565 containerd[1458]: time="2025-07-14T22:22:52.263518002Z" level=info msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" Jul 14 22:22:52.263905 containerd[1458]: time="2025-07-14T22:22:52.263769466Z" level=info msg="Ensure that sandbox 3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7 in task-service has been cleanup successfully" Jul 14 22:22:52.286695 containerd[1458]: time="2025-07-14T22:22:52.286638873Z" level=error msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" failed" error="failed to destroy network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:52.286922 kubelet[1762]: E0714 22:22:52.286878 1762 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:22:52.287000 kubelet[1762]: E0714 22:22:52.286948 1762 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7"} Jul 14 22:22:52.287035 kubelet[1762]: E0714 22:22:52.287022 1762 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:22:52.287103 kubelet[1762]: E0714 22:22:52.287046 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12c1a6b9-dbe4-46bb-b922-bd804d0944b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t7zr7" podUID="12c1a6b9-dbe4-46bb-b922-bd804d0944b3" Jul 14 22:22:52.629767 kubelet[1762]: E0714 22:22:52.628126 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:53.628292 kubelet[1762]: E0714 22:22:53.628237 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:54.629414 kubelet[1762]: E0714 22:22:54.629350 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:55.223241 systemd[1]: Created slice kubepods-besteffort-pod5feda60e_2be9_4f18_8467_5cae0b041b1f.slice - libcontainer container kubepods-besteffort-pod5feda60e_2be9_4f18_8467_5cae0b041b1f.slice. 
Jul 14 22:22:55.318324 kubelet[1762]: I0714 22:22:55.318285 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7727\" (UniqueName: \"kubernetes.io/projected/5feda60e-2be9-4f18-8467-5cae0b041b1f-kube-api-access-d7727\") pod \"nginx-deployment-7fcdb87857-dhqhx\" (UID: \"5feda60e-2be9-4f18-8467-5cae0b041b1f\") " pod="default/nginx-deployment-7fcdb87857-dhqhx" Jul 14 22:22:55.526195 containerd[1458]: time="2025-07-14T22:22:55.526068723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dhqhx,Uid:5feda60e-2be9-4f18-8467-5cae0b041b1f,Namespace:default,Attempt:0,}" Jul 14 22:22:55.628445 containerd[1458]: time="2025-07-14T22:22:55.628366586Z" level=error msg="Failed to destroy network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:55.628797 containerd[1458]: time="2025-07-14T22:22:55.628774247Z" level=error msg="encountered an error cleaning up failed sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:55.628868 containerd[1458]: time="2025-07-14T22:22:55.628837649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dhqhx,Uid:5feda60e-2be9-4f18-8467-5cae0b041b1f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:55.629958 kubelet[1762]: E0714 22:22:55.629828 1762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:55.629958 kubelet[1762]: E0714 22:22:55.629889 1762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-dhqhx" Jul 14 22:22:55.629958 kubelet[1762]: E0714 22:22:55.629911 1762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-dhqhx" Jul 14 22:22:55.630348 kubelet[1762]: E0714 22:22:55.629952 1762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-dhqhx_default(5feda60e-2be9-4f18-8467-5cae0b041b1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-dhqhx_default(5feda60e-2be9-4f18-8467-5cae0b041b1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-dhqhx" podUID="5feda60e-2be9-4f18-8467-5cae0b041b1f" Jul 14 22:22:55.630883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610-shm.mount: Deactivated successfully. Jul 14 22:22:55.630978 kubelet[1762]: E0714 22:22:55.630877 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:56.270898 kubelet[1762]: I0714 22:22:56.270625 1762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:22:56.271542 containerd[1458]: time="2025-07-14T22:22:56.271368649Z" level=info msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" Jul 14 22:22:56.271704 containerd[1458]: time="2025-07-14T22:22:56.271668343Z" level=info msg="Ensure that sandbox d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610 in task-service has been cleanup successfully" Jul 14 22:22:56.297411 containerd[1458]: time="2025-07-14T22:22:56.297322955Z" level=error msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" failed" error="failed to destroy network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:22:56.297662 kubelet[1762]: E0714 22:22:56.297596 1762 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:22:56.297715 kubelet[1762]: E0714 22:22:56.297671 1762 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610"} Jul 14 22:22:56.297894 kubelet[1762]: E0714 22:22:56.297710 1762 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5feda60e-2be9-4f18-8467-5cae0b041b1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 
22:22:56.297894 kubelet[1762]: E0714 22:22:56.297742 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5feda60e-2be9-4f18-8467-5cae0b041b1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-dhqhx" podUID="5feda60e-2be9-4f18-8467-5cae0b041b1f" Jul 14 22:22:56.632157 kubelet[1762]: E0714 22:22:56.632001 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:57.084258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831555723.mount: Deactivated successfully. Jul 14 22:22:57.633000 kubelet[1762]: E0714 22:22:57.632922 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:57.848490 containerd[1458]: time="2025-07-14T22:22:57.848417015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:57.849454 containerd[1458]: time="2025-07-14T22:22:57.849373353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:22:57.850732 containerd[1458]: time="2025-07-14T22:22:57.850686553Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:57.852947 containerd[1458]: time="2025-07-14T22:22:57.852904671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:57.853485 containerd[1458]: time="2025-07-14T22:22:57.853445154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.591774313s" Jul 14 22:22:57.853485 containerd[1458]: time="2025-07-14T22:22:57.853476925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:22:57.862549 containerd[1458]: time="2025-07-14T22:22:57.862498284Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:22:57.878170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519058770.mount: Deactivated successfully. 
Jul 14 22:22:57.883818 containerd[1458]: time="2025-07-14T22:22:57.883712362Z" level=info msg="CreateContainer within sandbox \"95910299089f452f3b122c85abf1aea3294781304d3229995d87fdea9bc22c5d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"852a06de7d259da83ccd65f42be74e55fce15597469bf794eb231e7f150de556\"" Jul 14 22:22:57.884340 containerd[1458]: time="2025-07-14T22:22:57.884302970Z" level=info msg="StartContainer for \"852a06de7d259da83ccd65f42be74e55fce15597469bf794eb231e7f150de556\"" Jul 14 22:22:57.915760 systemd[1]: Started cri-containerd-852a06de7d259da83ccd65f42be74e55fce15597469bf794eb231e7f150de556.scope - libcontainer container 852a06de7d259da83ccd65f42be74e55fce15597469bf794eb231e7f150de556. Jul 14 22:22:57.948040 containerd[1458]: time="2025-07-14T22:22:57.947980590Z" level=info msg="StartContainer for \"852a06de7d259da83ccd65f42be74e55fce15597469bf794eb231e7f150de556\" returns successfully" Jul 14 22:22:58.025877 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:22:58.025988 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 22:22:58.289367 kubelet[1762]: I0714 22:22:58.289311 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pst5s" podStartSLOduration=15.628064006 podStartE2EDuration="32.289296738s" podCreationTimestamp="2025-07-14 22:22:26 +0000 UTC" firstStartedPulling="2025-07-14 22:22:41.193077815 +0000 UTC m=+17.436173809" lastFinishedPulling="2025-07-14 22:22:57.854310547 +0000 UTC m=+34.097406541" observedRunningTime="2025-07-14 22:22:58.289150039 +0000 UTC m=+34.532246033" watchObservedRunningTime="2025-07-14 22:22:58.289296738 +0000 UTC m=+34.532392722" Jul 14 22:22:58.633809 kubelet[1762]: E0714 22:22:58.633648 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:59.504647 kernel: bpftool[2628]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:22:59.633953 kubelet[1762]: E0714 22:22:59.633897 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:22:59.728591 systemd-networkd[1392]: vxlan.calico: Link UP Jul 14 22:22:59.728603 systemd-networkd[1392]: vxlan.calico: Gained carrier Jul 14 22:23:00.634314 kubelet[1762]: E0714 22:23:00.634250 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:01.635205 kubelet[1762]: E0714 22:23:01.635145 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:01.772842 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Jul 14 22:23:02.636089 kubelet[1762]: E0714 22:23:02.636022 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:03.219241 containerd[1458]: time="2025-07-14T22:23:03.219193918Z" level=info msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.258 [INFO][2713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.258 [INFO][2713] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" iface="eth0" netns="/var/run/netns/cni-0a1bd4d3-7959-7fd2-69b2-a11631f5e2a4" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.258 [INFO][2713] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" iface="eth0" netns="/var/run/netns/cni-0a1bd4d3-7959-7fd2-69b2-a11631f5e2a4" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.259 [INFO][2713] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" iface="eth0" netns="/var/run/netns/cni-0a1bd4d3-7959-7fd2-69b2-a11631f5e2a4" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.259 [INFO][2713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.259 [INFO][2713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.279 [INFO][2722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.279 [INFO][2722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.279 [INFO][2722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.284 [WARNING][2722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.284 [INFO][2722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.286 [INFO][2722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:03.293752 containerd[1458]: 2025-07-14 22:23:03.290 [INFO][2713] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:03.294192 containerd[1458]: time="2025-07-14T22:23:03.293905018Z" level=info msg="TearDown network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" successfully" Jul 14 22:23:03.294192 containerd[1458]: time="2025-07-14T22:23:03.293928543Z" level=info msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" returns successfully" Jul 14 22:23:03.295728 systemd[1]: run-netns-cni\x2d0a1bd4d3\x2d7959\x2d7fd2\x2d69b2\x2da11631f5e2a4.mount: Deactivated successfully. 
Jul 14 22:23:03.296162 containerd[1458]: time="2025-07-14T22:23:03.295894998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t7zr7,Uid:12c1a6b9-dbe4-46bb-b922-bd804d0944b3,Namespace:calico-system,Attempt:1,}" Jul 14 22:23:03.401105 systemd-networkd[1392]: cali816f3e32e11: Link UP Jul 14 22:23:03.401546 systemd-networkd[1392]: cali816f3e32e11: Gained carrier Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.339 [INFO][2730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-csi--node--driver--t7zr7-eth0 csi-node-driver- calico-system 12c1a6b9-dbe4-46bb-b922-bd804d0944b3 1290 0 2025-07-14 22:22:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.145 csi-node-driver-t7zr7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali816f3e32e11 [] [] }} ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.339 [INFO][2730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.366 [INFO][2744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" HandleID="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.366 [INFO][2744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" HandleID="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7630), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.145", "pod":"csi-node-driver-t7zr7", "timestamp":"2025-07-14 22:23:03.366060343 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.366 [INFO][2744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.366 [INFO][2744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.366 [INFO][2744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.373 [INFO][2744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.377 [INFO][2744] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.381 [INFO][2744] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.383 [INFO][2744] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.385 [INFO][2744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.385 [INFO][2744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.386 [INFO][2744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853 Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.391 [INFO][2744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.395 [INFO][2744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.1/26] block=192.168.31.0/26 handle="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.395 [INFO][2744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.1/26] handle="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" host="10.0.0.145" Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.395 [INFO][2744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
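The IPAM trace above claims 192.168.31.1/26 for the csi-node-driver pod out of the host-affine block 192.168.31.0/26 on node 10.0.0.145. A minimal standalone sketch (Go standard library only; not part of the log or of Calico itself) that double-checks the claimed address falls inside that block and that a /26 block holds 64 addresses:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Values copied from the ipam/ipam.go trace above.
        _, block, err := net.ParseCIDR("192.168.31.0/26")
        if err != nil {
            panic(err)
        }
        claimed := net.ParseIP("192.168.31.1")

        fmt.Println(block.Contains(claimed)) // true: the claimed /32 sits inside the host's block

        ones, bits := block.Mask.Size()
        fmt.Println(1 << (bits - ones)) // 64 addresses per /26 block
    }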
Jul 14 22:23:03.415680 containerd[1458]: 2025-07-14 22:23:03.395 [INFO][2744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.1/26] IPv6=[] ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" HandleID="k8s-pod-network.14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.398 [INFO][2730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--t7zr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12c1a6b9-dbe4-46bb-b922-bd804d0944b3", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"csi-node-driver-t7zr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816f3e32e11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.398 [INFO][2730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.1/32] ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.398 [INFO][2730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali816f3e32e11 ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.401 [INFO][2730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.402 [INFO][2730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" 
WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--t7zr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12c1a6b9-dbe4-46bb-b922-bd804d0944b3", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853", Pod:"csi-node-driver-t7zr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816f3e32e11", MAC:"e2:e7:0b:8a:d4:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:03.416440 containerd[1458]: 2025-07-14 22:23:03.410 [INFO][2730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853" Namespace="calico-system" Pod="csi-node-driver-t7zr7" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:03.432985 containerd[1458]: time="2025-07-14T22:23:03.432854495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:23:03.432985 containerd[1458]: time="2025-07-14T22:23:03.432914308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:23:03.432985 containerd[1458]: time="2025-07-14T22:23:03.432932293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:03.433807 containerd[1458]: time="2025-07-14T22:23:03.433754093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:03.455767 systemd[1]: Started cri-containerd-14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853.scope - libcontainer container 14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853. 
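The WorkloadEndpoint written above is named 10.0.0.145-k8s-csi--node--driver--t7zr7-eth0 while the pod itself is csi-node-driver-t7zr7: the dashes in the pod name appear doubled inside the endpoint name. A hypothetical decoding helper (not Calico code; the "<node>-k8s-<pod>-<endpoint>" layout and the dash-doubling are assumptions read off the names visible in this log):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        wep := "10.0.0.145-k8s-csi--node--driver--t7zr7-eth0"

        // Assumed layout: "<node>-k8s-<pod name with '-' doubled>-<endpoint>".
        rest := strings.TrimPrefix(wep, "10.0.0.145-k8s-")
        rest = strings.TrimSuffix(rest, "-eth0")
        pod := strings.ReplaceAll(rest, "--", "-")

        fmt.Println(pod) // csi-node-driver-t7zr7
    }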
Jul 14 22:23:03.465648 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:23:03.476549 containerd[1458]: time="2025-07-14T22:23:03.476458464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t7zr7,Uid:12c1a6b9-dbe4-46bb-b922-bd804d0944b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853\"" Jul 14 22:23:03.478027 containerd[1458]: time="2025-07-14T22:23:03.477999551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:23:03.636674 kubelet[1762]: E0714 22:23:03.636567 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:03.670076 update_engine[1444]: I20250714 22:23:03.669993 1444 update_attempter.cc:509] Updating boot flags... Jul 14 22:23:03.694672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2811) Jul 14 22:23:03.729694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2812) Jul 14 22:23:03.750646 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2812) Jul 14 22:23:04.610865 kubelet[1762]: E0714 22:23:04.610808 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:04.637204 kubelet[1762]: E0714 22:23:04.637159 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:04.776166 containerd[1458]: time="2025-07-14T22:23:04.776108704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:04.777013 containerd[1458]: time="2025-07-14T22:23:04.776953878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 14 22:23:04.778269 containerd[1458]: time="2025-07-14T22:23:04.778232564Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:04.780294 containerd[1458]: time="2025-07-14T22:23:04.780262317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:04.780877 containerd[1458]: time="2025-07-14T22:23:04.780844412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.302742105s" Jul 14 22:23:04.780970 containerd[1458]: time="2025-07-14T22:23:04.780875111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:23:04.782754 containerd[1458]: time="2025-07-14T22:23:04.782703240Z" level=info msg="CreateContainer within sandbox \"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 
22:23:04.797561 containerd[1458]: time="2025-07-14T22:23:04.797513672Z" level=info msg="CreateContainer within sandbox \"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f\"" Jul 14 22:23:04.798033 containerd[1458]: time="2025-07-14T22:23:04.797991218Z" level=info msg="StartContainer for \"4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f\"" Jul 14 22:23:04.820971 systemd[1]: run-containerd-runc-k8s.io-4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f-runc.DlDo9x.mount: Deactivated successfully. Jul 14 22:23:04.826751 systemd[1]: Started cri-containerd-4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f.scope - libcontainer container 4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f. Jul 14 22:23:04.859308 containerd[1458]: time="2025-07-14T22:23:04.859259782Z" level=info msg="StartContainer for \"4df350fcd49b483938939d04e2a86a4eb3d0c7d8e43edccb4796e4129b71af6f\" returns successfully" Jul 14 22:23:04.860471 containerd[1458]: time="2025-07-14T22:23:04.860449310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:23:05.420742 systemd-networkd[1392]: cali816f3e32e11: Gained IPv6LL Jul 14 22:23:05.637446 kubelet[1762]: E0714 22:23:05.637396 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:06.375640 containerd[1458]: time="2025-07-14T22:23:06.375576458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:06.376375 containerd[1458]: time="2025-07-14T22:23:06.376324827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 14 22:23:06.377571 containerd[1458]: time="2025-07-14T22:23:06.377547524Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:06.379483 containerd[1458]: time="2025-07-14T22:23:06.379444160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:06.380080 containerd[1458]: time="2025-07-14T22:23:06.380030150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.519553048s" Jul 14 22:23:06.380125 containerd[1458]: time="2025-07-14T22:23:06.380077189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 22:23:06.381956 containerd[1458]: time="2025-07-14T22:23:06.381915824Z" level=info msg="CreateContainer within sandbox \"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:23:06.395846 containerd[1458]: time="2025-07-14T22:23:06.395801423Z" level=info msg="CreateContainer within sandbox \"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5956c4495609477839fafee90f86fd340030d345b142302fc544276f14661039\"" Jul 14 22:23:06.396296 containerd[1458]: time="2025-07-14T22:23:06.396265833Z" level=info msg="StartContainer for \"5956c4495609477839fafee90f86fd340030d345b142302fc544276f14661039\"" Jul 14 22:23:06.430748 systemd[1]: Started cri-containerd-5956c4495609477839fafee90f86fd340030d345b142302fc544276f14661039.scope - libcontainer container 5956c4495609477839fafee90f86fd340030d345b142302fc544276f14661039. Jul 14 22:23:06.458361 containerd[1458]: time="2025-07-14T22:23:06.458317645Z" level=info msg="StartContainer for \"5956c4495609477839fafee90f86fd340030d345b142302fc544276f14661039\" returns successfully" Jul 14 22:23:06.467338 kubelet[1762]: I0714 22:23:06.467309 1762 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 22:23:06.467338 kubelet[1762]: I0714 22:23:06.467336 1762 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:23:06.638127 kubelet[1762]: E0714 22:23:06.638013 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:07.381800 kubelet[1762]: I0714 22:23:07.381712 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t7zr7" podStartSLOduration=38.47876224 podStartE2EDuration="41.381693036s" podCreationTimestamp="2025-07-14 22:22:26 +0000 UTC" firstStartedPulling="2025-07-14 22:23:03.477774594 +0000 UTC m=+39.720870578" lastFinishedPulling="2025-07-14 22:23:06.38070538 +0000 UTC m=+42.623801374" observedRunningTime="2025-07-14 22:23:07.381589139 +0000 UTC m=+43.624685133" watchObservedRunningTime="2025-07-14 22:23:07.381693036 +0000 UTC m=+43.624789050" Jul 14 22:23:07.639329 kubelet[1762]: E0714 22:23:07.639170 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:08.640083 kubelet[1762]: E0714 22:23:08.640018 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:09.641224 kubelet[1762]: E0714 22:23:09.641160 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:10.219253 containerd[1458]: time="2025-07-14T22:23:10.219205648Z" level=info msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.258 [INFO][2926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.258 [INFO][2926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" iface="eth0" netns="/var/run/netns/cni-d4388fd1-603b-16c6-a333-4186e3e000b2" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.258 [INFO][2926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" iface="eth0" netns="/var/run/netns/cni-d4388fd1-603b-16c6-a333-4186e3e000b2" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.259 [INFO][2926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" iface="eth0" netns="/var/run/netns/cni-d4388fd1-603b-16c6-a333-4186e3e000b2" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.259 [INFO][2926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.259 [INFO][2926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.283 [INFO][2934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.283 [INFO][2934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.283 [INFO][2934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.289 [WARNING][2934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.289 [INFO][2934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.291 [INFO][2934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:10.296993 containerd[1458]: 2025-07-14 22:23:10.294 [INFO][2926] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:10.297383 containerd[1458]: time="2025-07-14T22:23:10.297236207Z" level=info msg="TearDown network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" successfully" Jul 14 22:23:10.297383 containerd[1458]: time="2025-07-14T22:23:10.297265942Z" level=info msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" returns successfully" Jul 14 22:23:10.297970 containerd[1458]: time="2025-07-14T22:23:10.297942802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dhqhx,Uid:5feda60e-2be9-4f18-8467-5cae0b041b1f,Namespace:default,Attempt:1,}" Jul 14 22:23:10.299043 systemd[1]: run-netns-cni\x2dd4388fd1\x2d603b\x2d16c6\x2da333\x2d4186e3e000b2.mount: Deactivated successfully. Jul 14 22:23:10.448852 systemd-networkd[1392]: calidb83ef3c1f1: Link UP Jul 14 22:23:10.449844 systemd-networkd[1392]: calidb83ef3c1f1: Gained carrier Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.342 [INFO][2942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0 nginx-deployment-7fcdb87857- default 5feda60e-2be9-4f18-8467-5cae0b041b1f 1321 0 2025-07-14 22:22:54 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.145 nginx-deployment-7fcdb87857-dhqhx eth0 default [] [] [kns.default ksa.default.default] calidb83ef3c1f1 [] [] }} ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.342 [INFO][2942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.368 [INFO][2957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" HandleID="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.368 [INFO][2957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" HandleID="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d79e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"nginx-deployment-7fcdb87857-dhqhx", "timestamp":"2025-07-14 22:23:10.368586173 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.368 [INFO][2957] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.368 [INFO][2957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.368 [INFO][2957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.421 [INFO][2957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.425 [INFO][2957] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.430 [INFO][2957] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.432 [INFO][2957] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.433 [INFO][2957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.433 [INFO][2957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.435 [INFO][2957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3 Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.438 [INFO][2957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.443 [INFO][2957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.2/26] block=192.168.31.0/26 handle="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.443 [INFO][2957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.2/26] handle="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" host="10.0.0.145" Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.443 [INFO][2957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
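The kubelet pod_startup_latency_tracker entry at 22:23:07 above reports podStartE2EDuration=41.381693036s but podStartSLOduration=38.47876224s for csi-node-driver-t7zr7. The gap appears to be the image-pull window (firstStartedPulling to lastFinishedPulling); a small sketch redoing that arithmetic with the logged timestamps (Go standard library only, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the pod_startup_latency_tracker log line above.
        e2e := 41381693036 * time.Nanosecond
        firstPull, _ := time.Parse(time.RFC3339Nano, "2025-07-14T22:23:03.477774594Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2025-07-14T22:23:06.38070538Z")

        // SLO duration seems to exclude the time spent pulling images.
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo) // 38.47876225s; the logged 38.47876224s differs only in the
        //                  last digit because lastFinishedPulling is printed truncated
    }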
Jul 14 22:23:10.458435 containerd[1458]: 2025-07-14 22:23:10.443 [INFO][2957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.2/26] IPv6=[] ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" HandleID="k8s-pod-network.0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.447 [INFO][2942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5feda60e-2be9-4f18-8467-5cae0b041b1f", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-dhqhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidb83ef3c1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.447 [INFO][2942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.2/32] ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.447 [INFO][2942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb83ef3c1f1 ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.448 [INFO][2942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.449 [INFO][2942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" 
WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5feda60e-2be9-4f18-8467-5cae0b041b1f", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3", Pod:"nginx-deployment-7fcdb87857-dhqhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidb83ef3c1f1", MAC:"36:ba:95:6d:ff:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:10.459365 containerd[1458]: 2025-07-14 22:23:10.455 [INFO][2942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3" Namespace="default" Pod="nginx-deployment-7fcdb87857-dhqhx" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:10.476746 containerd[1458]: time="2025-07-14T22:23:10.476390250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:23:10.476746 containerd[1458]: time="2025-07-14T22:23:10.476451747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:23:10.476746 containerd[1458]: time="2025-07-14T22:23:10.476464270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:10.477761 containerd[1458]: time="2025-07-14T22:23:10.476551024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:10.503762 systemd[1]: Started cri-containerd-0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3.scope - libcontainer container 0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3. 
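Both veth endpoints recorded in this log carry generated MAC addresses (e2:e7:0b:8a:d4:0d for cali816f3e32e11 above, 36:ba:95:6d:ff:4d for calidb83ef3c1f1 here). A quick standalone check (Go standard library; not part of the CNI plugin) that both are locally administered unicast addresses, as generated interface MACs normally are:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, s := range []string{"e2:e7:0b:8a:d4:0d", "36:ba:95:6d:ff:4d"} {
            mac, err := net.ParseMAC(s)
            if err != nil {
                panic(err)
            }
            local := mac[0]&0x02 != 0   // "locally administered" bit set
            unicast := mac[0]&0x01 == 0 // I/G bit clear => unicast
            fmt.Println(s, local, unicast) // both lines print: <mac> true true
        }
    }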
Jul 14 22:23:10.514258 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:23:10.536469 containerd[1458]: time="2025-07-14T22:23:10.536437775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dhqhx,Uid:5feda60e-2be9-4f18-8467-5cae0b041b1f,Namespace:default,Attempt:1,} returns sandbox id \"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3\"" Jul 14 22:23:10.537559 containerd[1458]: time="2025-07-14T22:23:10.537528366Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 14 22:23:10.641933 kubelet[1762]: E0714 22:23:10.641896 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:11.642497 kubelet[1762]: E0714 22:23:11.642444 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:11.756825 systemd-networkd[1392]: calidb83ef3c1f1: Gained IPv6LL Jul 14 22:23:12.643406 kubelet[1762]: E0714 22:23:12.643366 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:13.644277 kubelet[1762]: E0714 22:23:13.644226 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:13.870877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006134909.mount: Deactivated successfully. Jul 14 22:23:14.644971 kubelet[1762]: E0714 22:23:14.644890 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:15.243054 containerd[1458]: time="2025-07-14T22:23:15.242983208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:15.243902 containerd[1458]: time="2025-07-14T22:23:15.243831910Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73309401" Jul 14 22:23:15.245275 containerd[1458]: time="2025-07-14T22:23:15.245212013Z" level=info msg="ImageCreate event name:\"sha256:f6422896ca84c9af48d5417d6b7a573bf6b38f81edc15538907d987fc658d909\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:15.248281 containerd[1458]: time="2025-07-14T22:23:15.248237141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:15.249511 containerd[1458]: time="2025-07-14T22:23:15.249459717Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f6422896ca84c9af48d5417d6b7a573bf6b38f81edc15538907d987fc658d909\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"73309279\" in 4.711891184s" Jul 14 22:23:15.249511 containerd[1458]: time="2025-07-14T22:23:15.249492309Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f6422896ca84c9af48d5417d6b7a573bf6b38f81edc15538907d987fc658d909\"" Jul 14 22:23:15.251741 containerd[1458]: time="2025-07-14T22:23:15.251704051Z" level=info msg="CreateContainer within sandbox \"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jul 14 22:23:15.269040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628528027.mount: Deactivated successfully. Jul 14 22:23:15.269566 containerd[1458]: time="2025-07-14T22:23:15.269137874Z" level=info msg="CreateContainer within sandbox \"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7ab4a60bbe73de1ab098fb045c2c11c7d08a81f5dbc017a8a06c0aac11fcaaa1\"" Jul 14 22:23:15.270145 containerd[1458]: time="2025-07-14T22:23:15.269825842Z" level=info msg="StartContainer for \"7ab4a60bbe73de1ab098fb045c2c11c7d08a81f5dbc017a8a06c0aac11fcaaa1\"" Jul 14 22:23:15.346760 systemd[1]: Started cri-containerd-7ab4a60bbe73de1ab098fb045c2c11c7d08a81f5dbc017a8a06c0aac11fcaaa1.scope - libcontainer container 7ab4a60bbe73de1ab098fb045c2c11c7d08a81f5dbc017a8a06c0aac11fcaaa1. Jul 14 22:23:15.490714 containerd[1458]: time="2025-07-14T22:23:15.490665615Z" level=info msg="StartContainer for \"7ab4a60bbe73de1ab098fb045c2c11c7d08a81f5dbc017a8a06c0aac11fcaaa1\" returns successfully" Jul 14 22:23:15.645685 kubelet[1762]: E0714 22:23:15.645535 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:16.320589 kubelet[1762]: I0714 22:23:16.320525 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-dhqhx" podStartSLOduration=17.607220491 podStartE2EDuration="22.320508514s" podCreationTimestamp="2025-07-14 22:22:54 +0000 UTC" firstStartedPulling="2025-07-14 22:23:10.537169097 +0000 UTC m=+46.780265091" lastFinishedPulling="2025-07-14 22:23:15.25045712 +0000 UTC m=+51.493553114" observedRunningTime="2025-07-14 22:23:16.320454061 +0000 UTC m=+52.563550055" watchObservedRunningTime="2025-07-14 22:23:16.320508514 +0000 UTC m=+52.563604509" Jul 14 22:23:16.646655 kubelet[1762]: E0714 22:23:16.646426 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:17.647117 kubelet[1762]: E0714 22:23:17.647049 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:18.647885 kubelet[1762]: E0714 22:23:18.647826 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:19.648913 kubelet[1762]: E0714 22:23:19.648854 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:20.649021 kubelet[1762]: E0714 22:23:20.648951 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:21.649301 kubelet[1762]: E0714 22:23:21.649238 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:22.650053 kubelet[1762]: E0714 22:23:22.649995 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:23.650427 kubelet[1762]: E0714 22:23:23.650369 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:24.610373 kubelet[1762]: E0714 22:23:24.610307 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 
22:23:24.626722 containerd[1458]: time="2025-07-14T22:23:24.626673262Z" level=info msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" Jul 14 22:23:24.651180 kubelet[1762]: E0714 22:23:24.651120 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.661 [WARNING][3122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--t7zr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12c1a6b9-dbe4-46bb-b922-bd804d0944b3", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853", Pod:"csi-node-driver-t7zr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816f3e32e11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.661 [INFO][3122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.661 [INFO][3122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" iface="eth0" netns="" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.661 [INFO][3122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.661 [INFO][3122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.680 [INFO][3131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.680 [INFO][3131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.680 [INFO][3131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.686 [WARNING][3131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.686 [INFO][3131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.688 [INFO][3131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:24.693353 containerd[1458]: 2025-07-14 22:23:24.690 [INFO][3122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.693825 containerd[1458]: time="2025-07-14T22:23:24.693388272Z" level=info msg="TearDown network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" successfully" Jul 14 22:23:24.693825 containerd[1458]: time="2025-07-14T22:23:24.693416304Z" level=info msg="StopPodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" returns successfully" Jul 14 22:23:24.694023 containerd[1458]: time="2025-07-14T22:23:24.693985857Z" level=info msg="RemovePodSandbox for \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" Jul 14 22:23:24.694023 containerd[1458]: time="2025-07-14T22:23:24.694016925Z" level=info msg="Forcibly stopping sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\"" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.726 [WARNING][3149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--t7zr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12c1a6b9-dbe4-46bb-b922-bd804d0944b3", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"14a17ed789a0b0bd3850af1b3ab280f0c744337c0be3bbb00d5d8784f9e97853", Pod:"csi-node-driver-t7zr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816f3e32e11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.727 [INFO][3149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.727 [INFO][3149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" iface="eth0" netns="" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.727 [INFO][3149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.727 [INFO][3149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.745 [INFO][3158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.745 [INFO][3158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.745 [INFO][3158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.751 [WARNING][3158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.751 [INFO][3158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" HandleID="k8s-pod-network.3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Workload="10.0.0.145-k8s-csi--node--driver--t7zr7-eth0" Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.752 [INFO][3158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:24.757585 containerd[1458]: 2025-07-14 22:23:24.755 [INFO][3149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7" Jul 14 22:23:24.758129 containerd[1458]: time="2025-07-14T22:23:24.757659802Z" level=info msg="TearDown network for sandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" successfully" Jul 14 22:23:24.761326 containerd[1458]: time="2025-07-14T22:23:24.761287089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:23:24.761389 containerd[1458]: time="2025-07-14T22:23:24.761325250Z" level=info msg="RemovePodSandbox \"3e5a6a7d789d195b3d40b7558cbdb4f57cd8a567751028a4250558d40d530bb7\" returns successfully" Jul 14 22:23:24.761913 containerd[1458]: time="2025-07-14T22:23:24.761874904Z" level=info msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.792 [WARNING][3178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5feda60e-2be9-4f18-8467-5cae0b041b1f", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3", Pod:"nginx-deployment-7fcdb87857-dhqhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidb83ef3c1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.792 [INFO][3178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.792 [INFO][3178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" iface="eth0" netns="" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.792 [INFO][3178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.792 [INFO][3178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.812 [INFO][3187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.812 [INFO][3187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.812 [INFO][3187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.817 [WARNING][3187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.817 [INFO][3187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.819 [INFO][3187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:24.824116 containerd[1458]: 2025-07-14 22:23:24.821 [INFO][3178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.824116 containerd[1458]: time="2025-07-14T22:23:24.824118619Z" level=info msg="TearDown network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" successfully" Jul 14 22:23:24.824116 containerd[1458]: time="2025-07-14T22:23:24.824142784Z" level=info msg="StopPodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" returns successfully" Jul 14 22:23:24.824646 containerd[1458]: time="2025-07-14T22:23:24.824579577Z" level=info msg="RemovePodSandbox for \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" Jul 14 22:23:24.824646 containerd[1458]: time="2025-07-14T22:23:24.824600927Z" level=info msg="Forcibly stopping sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\"" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.855 [WARNING][3205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5feda60e-2be9-4f18-8467-5cae0b041b1f", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"0c896484560d2f5727a17f56f51928225b559e9641e5f442467030733798edb3", Pod:"nginx-deployment-7fcdb87857-dhqhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidb83ef3c1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.855 [INFO][3205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.855 [INFO][3205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" iface="eth0" netns="" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.855 [INFO][3205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.855 [INFO][3205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.872 [INFO][3214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.873 [INFO][3214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.873 [INFO][3214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.878 [WARNING][3214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.878 [INFO][3214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" HandleID="k8s-pod-network.d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Workload="10.0.0.145-k8s-nginx--deployment--7fcdb87857--dhqhx-eth0" Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.879 [INFO][3214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:24.884120 containerd[1458]: 2025-07-14 22:23:24.881 [INFO][3205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610" Jul 14 22:23:24.884120 containerd[1458]: time="2025-07-14T22:23:24.884084497Z" level=info msg="TearDown network for sandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" successfully" Jul 14 22:23:24.886780 containerd[1458]: time="2025-07-14T22:23:24.886742669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:23:24.886847 containerd[1458]: time="2025-07-14T22:23:24.886781423Z" level=info msg="RemovePodSandbox \"d149296ccb1030c3c1d693235ce26107a714665fc80a8afa03c38935f787d610\" returns successfully" Jul 14 22:23:25.652041 kubelet[1762]: E0714 22:23:25.651981 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:26.652907 kubelet[1762]: E0714 22:23:26.652837 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:27.653142 kubelet[1762]: E0714 22:23:27.653089 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:28.653908 kubelet[1762]: E0714 22:23:28.653853 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:29.654508 kubelet[1762]: E0714 22:23:29.654456 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:30.655489 kubelet[1762]: E0714 22:23:30.655448 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:31.655632 kubelet[1762]: E0714 22:23:31.655540 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:32.656679 kubelet[1762]: E0714 22:23:32.656601 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:33.657588 kubelet[1762]: E0714 22:23:33.657542 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:34.657953 kubelet[1762]: E0714 22:23:34.657907 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:35.658597 
kubelet[1762]: E0714 22:23:35.658537 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:36.658794 kubelet[1762]: E0714 22:23:36.658740 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:37.659123 kubelet[1762]: E0714 22:23:37.659068 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:38.659464 kubelet[1762]: E0714 22:23:38.659397 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:39.660432 kubelet[1762]: E0714 22:23:39.660366 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:40.661503 kubelet[1762]: E0714 22:23:40.661439 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:41.662160 kubelet[1762]: E0714 22:23:41.662094 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:41.989132 systemd[1]: Created slice kubepods-besteffort-pod43635248_1445_4145_9855_ad6aa6535320.slice - libcontainer container kubepods-besteffort-pod43635248_1445_4145_9855_ad6aa6535320.slice. Jul 14 22:23:42.085432 kubelet[1762]: I0714 22:23:42.085375 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/43635248-1445-4145-9855-ad6aa6535320-data\") pod \"nfs-server-provisioner-0\" (UID: \"43635248-1445-4145-9855-ad6aa6535320\") " pod="default/nfs-server-provisioner-0" Jul 14 22:23:42.085432 kubelet[1762]: I0714 22:23:42.085427 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq96x\" (UniqueName: \"kubernetes.io/projected/43635248-1445-4145-9855-ad6aa6535320-kube-api-access-wq96x\") pod \"nfs-server-provisioner-0\" (UID: \"43635248-1445-4145-9855-ad6aa6535320\") " pod="default/nfs-server-provisioner-0" Jul 14 22:23:42.293193 containerd[1458]: time="2025-07-14T22:23:42.293063483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:43635248-1445-4145-9855-ad6aa6535320,Namespace:default,Attempt:0,}" Jul 14 22:23:42.389514 systemd-networkd[1392]: cali60e51b789ff: Link UP Jul 14 22:23:42.389756 systemd-networkd[1392]: cali60e51b789ff: Gained carrier Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.329 [INFO][3260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 43635248-1445-4145-9855-ad6aa6535320 1422 0 2025-07-14 22:23:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.145 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr 
TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.330 [INFO][3260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.352 [INFO][3274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" HandleID="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.352 [INFO][3274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" HandleID="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f590), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"nfs-server-provisioner-0", "timestamp":"2025-07-14 22:23:42.352776954 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.352 [INFO][3274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.353 [INFO][3274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.353 [INFO][3274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.359 [INFO][3274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.364 [INFO][3274] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.368 [INFO][3274] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.370 [INFO][3274] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.372 [INFO][3274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.372 [INFO][3274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.374 [INFO][3274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.377 [INFO][3274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.383 [INFO][3274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.3/26] block=192.168.31.0/26 handle="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.383 [INFO][3274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.3/26] handle="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" host="10.0.0.145" Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.383 [INFO][3274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:23:42.401295 containerd[1458]: 2025-07-14 22:23:42.383 [INFO][3274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.3/26] IPv6=[] ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" HandleID="k8s-pod-network.254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.402029 containerd[1458]: 2025-07-14 22:23:42.386 [INFO][3260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"43635248-1445-4145-9855-ad6aa6535320", ResourceVersion:"1422", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:42.402029 containerd[1458]: 2025-07-14 22:23:42.386 [INFO][3260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.3/32] ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.402029 containerd[1458]: 2025-07-14 22:23:42.386 [INFO][3260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.402029 containerd[1458]: 2025-07-14 22:23:42.389 [INFO][3260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.402177 containerd[1458]: 2025-07-14 22:23:42.389 [INFO][3260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"43635248-1445-4145-9855-ad6aa6535320", ResourceVersion:"1422", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b2:a8:50:e5:0a:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:42.402177 containerd[1458]: 2025-07-14 22:23:42.398 [INFO][3260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Jul 14 22:23:42.420342 containerd[1458]: time="2025-07-14T22:23:42.419650288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:23:42.420342 containerd[1458]: time="2025-07-14T22:23:42.420313723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:23:42.420564 containerd[1458]: time="2025-07-14T22:23:42.420327750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:42.420564 containerd[1458]: time="2025-07-14T22:23:42.420418069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:42.444852 systemd[1]: Started cri-containerd-254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d.scope - libcontainer container 254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d. 
Jul 14 22:23:42.456834 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:23:42.480805 containerd[1458]: time="2025-07-14T22:23:42.480762154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:43635248-1445-4145-9855-ad6aa6535320,Namespace:default,Attempt:0,} returns sandbox id \"254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d\"" Jul 14 22:23:42.482388 containerd[1458]: time="2025-07-14T22:23:42.482362649Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 14 22:23:42.662576 kubelet[1762]: E0714 22:23:42.662459 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:43.628822 systemd-networkd[1392]: cali60e51b789ff: Gained IPv6LL Jul 14 22:23:43.663176 kubelet[1762]: E0714 22:23:43.663126 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:44.305416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905764779.mount: Deactivated successfully. Jul 14 22:23:44.610813 kubelet[1762]: E0714 22:23:44.610697 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:44.664146 kubelet[1762]: E0714 22:23:44.664093 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:45.665043 kubelet[1762]: E0714 22:23:45.664978 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:46.394356 containerd[1458]: time="2025-07-14T22:23:46.394282705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:46.395372 containerd[1458]: time="2025-07-14T22:23:46.395320334Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jul 14 22:23:46.396737 containerd[1458]: time="2025-07-14T22:23:46.396683703Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:46.399504 containerd[1458]: time="2025-07-14T22:23:46.399469172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:46.400391 containerd[1458]: time="2025-07-14T22:23:46.400327873Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.917931421s" Jul 14 22:23:46.400391 containerd[1458]: time="2025-07-14T22:23:46.400380743Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 14 22:23:46.402676 containerd[1458]: time="2025-07-14T22:23:46.402644121Z" 
level=info msg="CreateContainer within sandbox \"254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 14 22:23:46.418439 containerd[1458]: time="2025-07-14T22:23:46.418377757Z" level=info msg="CreateContainer within sandbox \"254aedffea2322ee3649771e6e11f4901edb5b90c4fd6dc1b2be08b67f8ca19d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2ae678d1f2328504b55bc79cf931f2926d38dbe305070b8736f1922bd7126588\"" Jul 14 22:23:46.418927 containerd[1458]: time="2025-07-14T22:23:46.418892173Z" level=info msg="StartContainer for \"2ae678d1f2328504b55bc79cf931f2926d38dbe305070b8736f1922bd7126588\"" Jul 14 22:23:46.455845 systemd[1]: Started cri-containerd-2ae678d1f2328504b55bc79cf931f2926d38dbe305070b8736f1922bd7126588.scope - libcontainer container 2ae678d1f2328504b55bc79cf931f2926d38dbe305070b8736f1922bd7126588. Jul 14 22:23:46.481817 containerd[1458]: time="2025-07-14T22:23:46.481761941Z" level=info msg="StartContainer for \"2ae678d1f2328504b55bc79cf931f2926d38dbe305070b8736f1922bd7126588\" returns successfully" Jul 14 22:23:46.665568 kubelet[1762]: E0714 22:23:46.665405 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:47.666127 kubelet[1762]: E0714 22:23:47.666049 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:48.667023 kubelet[1762]: E0714 22:23:48.666957 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:49.667775 kubelet[1762]: E0714 22:23:49.667695 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:50.668415 kubelet[1762]: E0714 22:23:50.668371 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:51.669209 kubelet[1762]: E0714 22:23:51.669162 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:52.669626 kubelet[1762]: E0714 22:23:52.669570 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:53.670274 kubelet[1762]: E0714 22:23:53.670194 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:54.671201 kubelet[1762]: E0714 22:23:54.671144 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:55.671482 kubelet[1762]: E0714 22:23:55.671427 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:55.838220 kubelet[1762]: I0714 22:23:55.838149 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.918834821 podStartE2EDuration="14.838130834s" podCreationTimestamp="2025-07-14 22:23:41 +0000 UTC" firstStartedPulling="2025-07-14 22:23:42.482053428 +0000 UTC m=+78.725149422" lastFinishedPulling="2025-07-14 22:23:46.401349441 +0000 UTC m=+82.644445435" observedRunningTime="2025-07-14 22:23:47.377121989 +0000 UTC m=+83.620217993" watchObservedRunningTime="2025-07-14 22:23:55.838130834 +0000 UTC 
m=+92.081226828" Jul 14 22:23:55.844227 systemd[1]: Created slice kubepods-besteffort-podd5968094_6d2c_40f9_9c97_9fe42377b4e2.slice - libcontainer container kubepods-besteffort-podd5968094_6d2c_40f9_9c97_9fe42377b4e2.slice. Jul 14 22:23:55.864258 kubelet[1762]: I0714 22:23:55.864224 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzpr\" (UniqueName: \"kubernetes.io/projected/d5968094-6d2c-40f9-9c97-9fe42377b4e2-kube-api-access-dkzpr\") pod \"test-pod-1\" (UID: \"d5968094-6d2c-40f9-9c97-9fe42377b4e2\") " pod="default/test-pod-1" Jul 14 22:23:55.864352 kubelet[1762]: I0714 22:23:55.864262 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1119b5a-81d1-49bd-a932-f141b7e0d1c8\" (UniqueName: \"kubernetes.io/nfs/d5968094-6d2c-40f9-9c97-9fe42377b4e2-pvc-b1119b5a-81d1-49bd-a932-f141b7e0d1c8\") pod \"test-pod-1\" (UID: \"d5968094-6d2c-40f9-9c97-9fe42377b4e2\") " pod="default/test-pod-1" Jul 14 22:23:55.988664 kernel: FS-Cache: Loaded Jul 14 22:23:56.055682 kernel: RPC: Registered named UNIX socket transport module. Jul 14 22:23:56.055789 kernel: RPC: Registered udp transport module. Jul 14 22:23:56.055815 kernel: RPC: Registered tcp transport module. Jul 14 22:23:56.056899 kernel: RPC: Registered tcp-with-tls transport module. Jul 14 22:23:56.056946 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 14 22:23:56.314103 kernel: NFS: Registering the id_resolver key type Jul 14 22:23:56.314218 kernel: Key type id_resolver registered Jul 14 22:23:56.314240 kernel: Key type id_legacy registered Jul 14 22:23:56.338752 nfsidmap[3458]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 22:23:56.343032 nfsidmap[3461]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 22:23:56.447067 containerd[1458]: time="2025-07-14T22:23:56.447018202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5968094-6d2c-40f9-9c97-9fe42377b4e2,Namespace:default,Attempt:0,}" Jul 14 22:23:56.547295 systemd-networkd[1392]: cali5ec59c6bf6e: Link UP Jul 14 22:23:56.548252 systemd-networkd[1392]: cali5ec59c6bf6e: Gained carrier Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.491 [INFO][3465] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-test--pod--1-eth0 default d5968094-6d2c-40f9-9c97-9fe42377b4e2 1480 0 2025-07-14 22:23:42 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.145 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.492 [INFO][3465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.515 [INFO][3478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" HandleID="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Workload="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.515 [INFO][3478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" HandleID="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Workload="10.0.0.145-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"test-pod-1", "timestamp":"2025-07-14 22:23:56.515295614 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.515 [INFO][3478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.515 [INFO][3478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.515 [INFO][3478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.522 [INFO][3478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.525 [INFO][3478] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.529 [INFO][3478] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.530 [INFO][3478] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.532 [INFO][3478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.532 [INFO][3478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.533 [INFO][3478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8 Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.537 [INFO][3478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.542 [INFO][3478] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.4/26] block=192.168.31.0/26 handle="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.542 [INFO][3478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.4/26] handle="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" 
host="10.0.0.145" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.542 [INFO][3478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.542 [INFO][3478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.4/26] IPv6=[] ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" HandleID="k8s-pod-network.63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Workload="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558263 containerd[1458]: 2025-07-14 22:23:56.545 [INFO][3465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d5968094-6d2c-40f9-9c97-9fe42377b4e2", ResourceVersion:"1480", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:56.558910 containerd[1458]: 2025-07-14 22:23:56.545 [INFO][3465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.4/32] ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558910 containerd[1458]: 2025-07-14 22:23:56.545 [INFO][3465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558910 containerd[1458]: 2025-07-14 22:23:56.549 [INFO][3465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.558910 containerd[1458]: 2025-07-14 22:23:56.549 [INFO][3465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"d5968094-6d2c-40f9-9c97-9fe42377b4e2", ResourceVersion:"1480", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"62:8a:9a:14:42:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:23:56.558910 containerd[1458]: 2025-07-14 22:23:56.555 [INFO][3465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Jul 14 22:23:56.611029 containerd[1458]: time="2025-07-14T22:23:56.610222230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:23:56.611029 containerd[1458]: time="2025-07-14T22:23:56.610274631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:23:56.611029 containerd[1458]: time="2025-07-14T22:23:56.610295030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:56.611029 containerd[1458]: time="2025-07-14T22:23:56.610355968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:56.628741 systemd[1]: Started cri-containerd-63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8.scope - libcontainer container 63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8. 
Jul 14 22:23:56.639410 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:23:56.661771 containerd[1458]: time="2025-07-14T22:23:56.661723743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5968094-6d2c-40f9-9c97-9fe42377b4e2,Namespace:default,Attempt:0,} returns sandbox id \"63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8\"" Jul 14 22:23:56.662705 containerd[1458]: time="2025-07-14T22:23:56.662684385Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 14 22:23:56.672406 kubelet[1762]: E0714 22:23:56.672368 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:57.049937 containerd[1458]: time="2025-07-14T22:23:57.049896795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:23:57.050745 containerd[1458]: time="2025-07-14T22:23:57.050698509Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 14 22:23:57.053260 containerd[1458]: time="2025-07-14T22:23:57.053226286Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f6422896ca84c9af48d5417d6b7a573bf6b38f81edc15538907d987fc658d909\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"73309279\" in 390.516031ms" Jul 14 22:23:57.053260 containerd[1458]: time="2025-07-14T22:23:57.053252646Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f6422896ca84c9af48d5417d6b7a573bf6b38f81edc15538907d987fc658d909\"" Jul 14 22:23:57.054982 containerd[1458]: time="2025-07-14T22:23:57.054946939Z" level=info msg="CreateContainer within sandbox \"63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 14 22:23:57.071556 containerd[1458]: time="2025-07-14T22:23:57.071508152Z" level=info msg="CreateContainer within sandbox \"63babaa2ce42025976b02839c1ac6ca100c3e560920e718ac39b60e3ba6d9cd8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f168aaf220491fd16c926c3a0abcfa8c5a52091b8abb2a2c01ff21d14083991d\"" Jul 14 22:23:57.071982 containerd[1458]: time="2025-07-14T22:23:57.071937705Z" level=info msg="StartContainer for \"f168aaf220491fd16c926c3a0abcfa8c5a52091b8abb2a2c01ff21d14083991d\"" Jul 14 22:23:57.101741 systemd[1]: Started cri-containerd-f168aaf220491fd16c926c3a0abcfa8c5a52091b8abb2a2c01ff21d14083991d.scope - libcontainer container f168aaf220491fd16c926c3a0abcfa8c5a52091b8abb2a2c01ff21d14083991d. 
Jul 14 22:23:57.126498 containerd[1458]: time="2025-07-14T22:23:57.126442616Z" level=info msg="StartContainer for \"f168aaf220491fd16c926c3a0abcfa8c5a52091b8abb2a2c01ff21d14083991d\" returns successfully" Jul 14 22:23:57.392905 kubelet[1762]: I0714 22:23:57.392775 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.001240884 podStartE2EDuration="15.392761441s" podCreationTimestamp="2025-07-14 22:23:42 +0000 UTC" firstStartedPulling="2025-07-14 22:23:56.662385646 +0000 UTC m=+92.905481650" lastFinishedPulling="2025-07-14 22:23:57.053906223 +0000 UTC m=+93.297002207" observedRunningTime="2025-07-14 22:23:57.392424588 +0000 UTC m=+93.635520582" watchObservedRunningTime="2025-07-14 22:23:57.392761441 +0000 UTC m=+93.635857435" Jul 14 22:23:57.672929 kubelet[1762]: E0714 22:23:57.672806 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:58.604820 systemd-networkd[1392]: cali5ec59c6bf6e: Gained IPv6LL Jul 14 22:23:58.673738 kubelet[1762]: E0714 22:23:58.673684 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:23:59.674203 kubelet[1762]: E0714 22:23:59.674149 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"