Jan 29 11:56:53.919707 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 11:56:53.919736 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:56:53.919758 kernel: BIOS-provided physical RAM map: Jan 29 11:56:53.919767 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 11:56:53.919776 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 29 11:56:53.919784 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 29 11:56:53.919795 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 29 11:56:53.919805 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 29 11:56:53.919858 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 29 11:56:53.919867 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 29 11:56:53.919881 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 29 11:56:53.919890 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 29 11:56:53.919904 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 29 11:56:53.919913 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 29 11:56:53.919928 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 29 11:56:53.919938 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 29 11:56:53.919951 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 29 
11:56:53.919960 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 29 11:56:53.919970 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 29 11:56:53.919980 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 11:56:53.919990 kernel: NX (Execute Disable) protection: active Jan 29 11:56:53.919999 kernel: APIC: Static calls initialized Jan 29 11:56:53.920008 kernel: efi: EFI v2.7 by EDK II Jan 29 11:56:53.920027 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 29 11:56:53.920036 kernel: SMBIOS 2.8 present. Jan 29 11:56:53.920046 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 29 11:56:53.920055 kernel: Hypervisor detected: KVM Jan 29 11:56:53.920069 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:56:53.920078 kernel: kvm-clock: using sched offset of 4974544206 cycles Jan 29 11:56:53.920088 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:56:53.920099 kernel: tsc: Detected 2794.750 MHz processor Jan 29 11:56:53.920109 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:56:53.920120 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:56:53.920130 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 29 11:56:53.920140 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 29 11:56:53.920150 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:56:53.920163 kernel: Using GB pages for direct mapping Jan 29 11:56:53.920173 kernel: Secure boot disabled Jan 29 11:56:53.920182 kernel: ACPI: Early table checksum verification disabled Jan 29 11:56:53.920192 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 29 11:56:53.920207 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 29 11:56:53.920217 kernel: 
ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920228 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920242 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 29 11:56:53.920252 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920267 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920278 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920288 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:56:53.920299 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:56:53.920310 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 29 11:56:53.920323 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 29 11:56:53.920334 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 29 11:56:53.920344 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 29 11:56:53.920355 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 29 11:56:53.920365 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 29 11:56:53.920376 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 29 11:56:53.920386 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 29 11:56:53.920397 kernel: No NUMA configuration found Jan 29 11:56:53.920409 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 29 11:56:53.920423 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 29 11:56:53.920434 kernel: Zone ranges: Jan 29 11:56:53.920444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:56:53.920455 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 29 
11:56:53.920466 kernel: Normal empty Jan 29 11:56:53.920476 kernel: Movable zone start for each node Jan 29 11:56:53.920487 kernel: Early memory node ranges Jan 29 11:56:53.920497 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 29 11:56:53.920510 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 29 11:56:53.920521 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 29 11:56:53.920537 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 29 11:56:53.920548 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 29 11:56:53.920558 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 29 11:56:53.920571 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 29 11:56:53.920582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:56:53.920592 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 29 11:56:53.920602 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 29 11:56:53.920613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:56:53.920624 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 29 11:56:53.920637 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 29 11:56:53.920648 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 29 11:56:53.920658 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:56:53.920669 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:56:53.920679 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:56:53.920690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:56:53.920701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:56:53.920711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:56:53.920722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:56:53.920735 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 11 global_irq 11 high level) Jan 29 11:56:53.920746 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:56:53.920756 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:56:53.920767 kernel: TSC deadline timer available Jan 29 11:56:53.920778 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 11:56:53.920788 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:56:53.920799 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 11:56:53.920847 kernel: kvm-guest: setup PV sched yield Jan 29 11:56:53.920857 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 11:56:53.920871 kernel: Booting paravirtualized kernel on KVM Jan 29 11:56:53.920882 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:56:53.920893 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 11:56:53.920904 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 11:56:53.920914 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 11:56:53.920923 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 11:56:53.920933 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:56:53.920943 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:56:53.920955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:56:53.920971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 29 11:56:53.920981 kernel: random: crng init done Jan 29 11:56:53.920991 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:56:53.921001 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:56:53.921011 kernel: Fallback order for Node 0: 0 Jan 29 11:56:53.921030 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 29 11:56:53.921040 kernel: Policy zone: DMA32 Jan 29 11:56:53.921050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:56:53.921060 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved) Jan 29 11:56:53.921106 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 11:56:53.921116 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 11:56:53.921124 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:56:53.921134 kernel: Dynamic Preempt: voluntary Jan 29 11:56:53.921152 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:56:53.921165 kernel: rcu: RCU event tracing is enabled. Jan 29 11:56:53.921175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 11:56:53.921185 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:56:53.921194 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:56:53.921204 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:56:53.921213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:56:53.921225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 11:56:53.921234 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 11:56:53.921247 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 11:56:53.921256 kernel: Console: colour dummy device 80x25 Jan 29 11:56:53.921265 kernel: printk: console [ttyS0] enabled Jan 29 11:56:53.921277 kernel: ACPI: Core revision 20230628 Jan 29 11:56:53.921287 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:56:53.921296 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:56:53.921305 kernel: x2apic enabled Jan 29 11:56:53.921314 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:56:53.921324 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 11:56:53.921333 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 11:56:53.921342 kernel: kvm-guest: setup PV IPIs Jan 29 11:56:53.921352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:56:53.921364 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 11:56:53.921373 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 11:56:53.921383 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 11:56:53.921392 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 11:56:53.921401 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 11:56:53.921411 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:56:53.921420 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:56:53.921429 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:56:53.921438 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:56:53.921450 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 11:56:53.921459 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 11:56:53.921469 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:56:53.921510 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:56:53.921523 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 11:56:53.921543 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 11:56:53.921554 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 11:56:53.921574 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:56:53.921587 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:56:53.921597 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:56:53.921606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:56:53.921617 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 29 11:56:53.921627 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:56:53.921638 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:56:53.921648 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:56:53.921659 kernel: landlock: Up and running. Jan 29 11:56:53.921669 kernel: SELinux: Initializing. Jan 29 11:56:53.921682 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:56:53.921693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:56:53.921703 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 11:56:53.921714 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:56:53.921729 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:56:53.921741 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:56:53.921752 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 11:56:53.921762 kernel: ... version: 0 Jan 29 11:56:53.921773 kernel: ... bit width: 48 Jan 29 11:56:53.921804 kernel: ... generic registers: 6 Jan 29 11:56:53.921839 kernel: ... value mask: 0000ffffffffffff Jan 29 11:56:53.921851 kernel: ... max period: 00007fffffffffff Jan 29 11:56:53.921862 kernel: ... fixed-purpose events: 0 Jan 29 11:56:53.921873 kernel: ... event mask: 000000000000003f Jan 29 11:56:53.921884 kernel: signal: max sigframe size: 1776 Jan 29 11:56:53.921895 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:56:53.921906 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:56:53.921917 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:56:53.921931 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:56:53.921942 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 29 11:56:53.921953 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 11:56:53.921964 kernel: smpboot: Max logical packages: 1 Jan 29 11:56:53.921975 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 11:56:53.921986 kernel: devtmpfs: initialized Jan 29 11:56:53.921997 kernel: x86/mm: Memory block size: 128MB Jan 29 11:56:53.922008 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 29 11:56:53.922030 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 29 11:56:53.922041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 29 11:56:53.922056 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 29 11:56:53.922067 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 29 11:56:53.922079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:56:53.922090 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 11:56:53.922101 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:56:53.922112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:56:53.922124 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:56:53.922135 kernel: audit: type=2000 audit(1738151813.266:1): state=initialized audit_enabled=0 res=1 Jan 29 11:56:53.922149 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:56:53.922160 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:56:53.922171 kernel: cpuidle: using governor menu Jan 29 11:56:53.922182 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:56:53.922193 kernel: dca service started, version 1.12.1 Jan 29 11:56:53.922204 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 
11:56:53.922216 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 11:56:53.922227 kernel: PCI: Using configuration type 1 for base access Jan 29 11:56:53.922238 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 11:56:53.922252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:56:53.922263 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:56:53.922274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:56:53.922286 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:56:53.922297 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:56:53.922307 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:56:53.922319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:56:53.922330 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:56:53.922341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:56:53.922355 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:56:53.922367 kernel: ACPI: Interpreter enabled Jan 29 11:56:53.922377 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:56:53.922388 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:56:53.922400 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:56:53.922411 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:56:53.922422 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 11:56:53.922433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:56:53.922741 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:56:53.923001 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 11:56:53.923186 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 11:56:53.923202 kernel: PCI host 
bridge to bus 0000:00 Jan 29 11:56:53.923378 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:56:53.923528 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:56:53.923677 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:56:53.923853 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 11:56:53.924006 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:56:53.924167 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 29 11:56:53.924315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:56:53.924530 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 11:56:53.924715 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 11:56:53.924909 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 29 11:56:53.925071 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 29 11:56:53.925210 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 29 11:56:53.925347 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 29 11:56:53.925485 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:56:53.925680 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:56:53.925840 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 29 11:56:53.926004 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 29 11:56:53.926206 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 29 11:56:53.926412 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:56:53.926585 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 29 11:56:53.926762 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 29 11:56:53.926989 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 
64bit pref] Jan 29 11:56:53.927201 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:56:53.927379 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 29 11:56:53.927554 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 29 11:56:53.927724 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 29 11:56:53.927917 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 29 11:56:53.928131 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 11:56:53.928305 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 11:56:53.928503 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 11:56:53.928696 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 29 11:56:53.928946 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 29 11:56:53.929150 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 11:56:53.929314 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 29 11:56:53.929330 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:56:53.929341 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:56:53.929352 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:56:53.929363 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:56:53.929380 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 11:56:53.929390 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 11:56:53.929401 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 11:56:53.929411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 11:56:53.929421 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 11:56:53.929432 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 11:56:53.929442 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 11:56:53.929452 
kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 11:56:53.929463 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 11:56:53.929477 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 11:56:53.929488 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 11:56:53.929499 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 11:56:53.929510 kernel: iommu: Default domain type: Translated Jan 29 11:56:53.929520 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:56:53.929531 kernel: efivars: Registered efivars operations Jan 29 11:56:53.929541 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:56:53.929552 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:56:53.929562 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 29 11:56:53.929577 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 29 11:56:53.929588 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 29 11:56:53.929598 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 29 11:56:53.929768 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 11:56:53.929997 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 11:56:53.930173 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:56:53.930189 kernel: vgaarb: loaded Jan 29 11:56:53.930200 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:56:53.930216 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:56:53.930227 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:56:53.930237 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:56:53.930248 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:56:53.930259 kernel: pnp: PnP ACPI init Jan 29 11:56:53.930450 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 11:56:53.930468 kernel: pnp: PnP ACPI: 
found 6 devices Jan 29 11:56:53.930479 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:56:53.930495 kernel: NET: Registered PF_INET protocol family Jan 29 11:56:53.930506 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:56:53.930517 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:56:53.930528 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:56:53.930539 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:56:53.930550 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:56:53.930561 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:56:53.930572 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:56:53.930583 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:56:53.930598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:56:53.930608 kernel: NET: Registered PF_XDP protocol family Jan 29 11:56:53.930770 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 29 11:56:53.930964 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 29 11:56:53.931125 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:56:53.931274 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:56:53.931421 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:56:53.931568 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 11:56:53.931720 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 11:56:53.931924 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 29 11:56:53.931942 kernel: PCI: CLS 0 bytes, default 64 Jan 29 
11:56:53.931953 kernel: Initialise system trusted keyrings Jan 29 11:56:53.931964 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:56:53.931975 kernel: Key type asymmetric registered Jan 29 11:56:53.931985 kernel: Asymmetric key parser 'x509' registered Jan 29 11:56:53.931996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:56:53.932006 kernel: io scheduler mq-deadline registered Jan 29 11:56:53.932033 kernel: io scheduler kyber registered Jan 29 11:56:53.932043 kernel: io scheduler bfq registered Jan 29 11:56:53.932053 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:56:53.932065 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 11:56:53.932075 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:56:53.932085 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:56:53.932096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:56:53.932107 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:56:53.932117 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:56:53.932131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:56:53.932141 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:56:53.932349 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 11:56:53.932367 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:56:53.932515 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 11:56:53.932670 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:56:53 UTC (1738151813) Jan 29 11:56:53.932844 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 11:56:53.932861 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 11:56:53.932878 kernel: efifb: probing for efifb Jan 29 11:56:53.932889 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 29 
11:56:53.932900 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 29 11:56:53.932910 kernel: efifb: scrolling: redraw Jan 29 11:56:53.932921 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 29 11:56:53.932932 kernel: Console: switching to colour frame buffer device 100x37 Jan 29 11:56:53.932966 kernel: fb0: EFI VGA frame buffer device Jan 29 11:56:53.932979 kernel: pstore: Using crash dump compression: deflate Jan 29 11:56:53.932987 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 11:56:53.932998 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:56:53.933005 kernel: Segment Routing with IPv6 Jan 29 11:56:53.933013 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:56:53.933034 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:56:53.933043 kernel: Key type dns_resolver registered Jan 29 11:56:53.933051 kernel: IPI shorthand broadcast: enabled Jan 29 11:56:53.933059 kernel: sched_clock: Marking stable (744003196, 119055394)->(909872671, -46814081) Jan 29 11:56:53.933067 kernel: registered taskstats version 1 Jan 29 11:56:53.933075 kernel: Loading compiled-in X.509 certificates Jan 29 11:56:53.933086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 11:56:53.933094 kernel: Key type .fscrypt registered Jan 29 11:56:53.933101 kernel: Key type fscrypt-provisioning registered Jan 29 11:56:53.933109 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 11:56:53.933117 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:56:53.933125 kernel: ima: No architecture policies found
Jan 29 11:56:53.933133 kernel: clk: Disabling unused clocks
Jan 29 11:56:53.933141 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:56:53.933151 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:56:53.933159 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:56:53.933167 kernel: Run /init as init process
Jan 29 11:56:53.933175 kernel: with arguments:
Jan 29 11:56:53.933183 kernel: /init
Jan 29 11:56:53.933190 kernel: with environment:
Jan 29 11:56:53.933198 kernel: HOME=/
Jan 29 11:56:53.933206 kernel: TERM=linux
Jan 29 11:56:53.933214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:56:53.933227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:56:53.933237 systemd[1]: Detected virtualization kvm.
Jan 29 11:56:53.933245 systemd[1]: Detected architecture x86-64.
Jan 29 11:56:53.933254 systemd[1]: Running in initrd.
Jan 29 11:56:53.933267 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:56:53.933275 systemd[1]: Hostname set to .
Jan 29 11:56:53.933283 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:56:53.933292 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:56:53.933300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:56:53.933308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:56:53.933318 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:56:53.933326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:56:53.933337 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:56:53.933346 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:56:53.933356 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:56:53.933364 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:56:53.933373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:56:53.933382 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:56:53.933390 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:56:53.933401 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:56:53.933409 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:56:53.933417 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:56:53.933426 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:56:53.933434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:56:53.933444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:56:53.933455 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:56:53.933464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:56:53.933473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:56:53.933484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:56:53.933492 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:56:53.933500 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:56:53.933509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:56:53.933517 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:56:53.933526 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:56:53.933534 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:56:53.933542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:56:53.933553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:53.933562 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:56:53.933570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:56:53.933579 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:56:53.933611 systemd-journald[193]: Collecting audit messages is disabled.
Jan 29 11:56:53.933635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:56:53.933644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:56:53.933652 systemd-journald[193]: Journal started
Jan 29 11:56:53.933672 systemd-journald[193]: Runtime Journal (/run/log/journal/6b58c1ddc79b4f248c653367b9b0a667) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:56:53.941195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:56:53.919519 systemd-modules-load[194]: Inserted module 'overlay'
Jan 29 11:56:53.945968 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:56:53.946600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:53.952199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:56:53.953559 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:53.956683 kernel: Bridge firewalling registered
Jan 29 11:56:53.953979 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 29 11:56:53.958260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:56:53.961263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:56:53.962756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:56:53.965319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:56:53.972779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:53.976455 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:56:53.978658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:56:53.981338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:56:53.983331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:56:54.002327 dracut-cmdline[224]: dracut-dracut-053
Jan 29 11:56:54.005681 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:54.018235 systemd-resolved[228]: Positive Trust Anchors:
Jan 29 11:56:54.018254 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:56:54.018286 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:56:54.020956 systemd-resolved[228]: Defaulting to hostname 'linux'.
Jan 29 11:56:54.022171 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:56:54.028618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:56:54.086854 kernel: SCSI subsystem initialized
Jan 29 11:56:54.095850 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:56:54.107840 kernel: iscsi: registered transport (tcp)
Jan 29 11:56:54.128905 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:56:54.128946 kernel: QLogic iSCSI HBA Driver
Jan 29 11:56:54.177849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:56:54.190952 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:56:54.216169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:56:54.216222 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:56:54.217320 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:56:54.258861 kernel: raid6: avx2x4 gen() 28207 MB/s
Jan 29 11:56:54.275854 kernel: raid6: avx2x2 gen() 28360 MB/s
Jan 29 11:56:54.293153 kernel: raid6: avx2x1 gen() 22570 MB/s
Jan 29 11:56:54.293189 kernel: raid6: using algorithm avx2x2 gen() 28360 MB/s
Jan 29 11:56:54.311149 kernel: raid6: .... xor() 16702 MB/s, rmw enabled
Jan 29 11:56:54.311181 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:56:54.331845 kernel: xor: automatically using best checksumming function avx
Jan 29 11:56:54.486851 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:56:54.502252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:56:54.513032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:56:54.527797 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jan 29 11:56:54.534051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:56:54.538037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:56:54.557532 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 29 11:56:54.603153 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:56:54.615067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:56:54.690490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:56:54.703040 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:56:54.716645 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:56:54.720270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:56:54.721737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:56:54.723418 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:56:54.737866 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:56:54.738054 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:56:54.742841 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:56:54.775283 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:56:54.775452 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:56:54.775465 kernel: GPT:9289727 != 19775487
Jan 29 11:56:54.775475 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:56:54.775486 kernel: GPT:9289727 != 19775487
Jan 29 11:56:54.775496 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:56:54.775506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:54.775517 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:56:54.775534 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:56:54.760667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:56:54.777072 kernel: libata version 3.00 loaded.
Jan 29 11:56:54.760875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:54.765386 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:54.766677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:56:54.767245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:54.768630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:54.781477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:54.792276 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:56:54.822362 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:56:54.822384 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:56:54.822578 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:56:54.822759 kernel: scsi host0: ahci
Jan 29 11:56:54.822972 kernel: scsi host1: ahci
Jan 29 11:56:54.823176 kernel: scsi host2: ahci
Jan 29 11:56:54.823386 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (459)
Jan 29 11:56:54.823402 kernel: scsi host3: ahci
Jan 29 11:56:54.823615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467)
Jan 29 11:56:54.823640 kernel: scsi host4: ahci
Jan 29 11:56:54.824217 kernel: scsi host5: ahci
Jan 29 11:56:54.824415 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 29 11:56:54.824431 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 29 11:56:54.824445 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 29 11:56:54.824459 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 29 11:56:54.824473 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 29 11:56:54.824490 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 29 11:56:54.786612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:56:54.802336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:56:54.802486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:54.829240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:56:54.836448 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:56:54.839025 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:56:54.843949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:56:54.848874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:56:54.864176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:56:54.865649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:54.876363 disk-uuid[557]: Primary Header is updated.
Jan 29 11:56:54.876363 disk-uuid[557]: Secondary Entries is updated.
Jan 29 11:56:54.876363 disk-uuid[557]: Secondary Header is updated.
Jan 29 11:56:54.881826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:54.887861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:54.890701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:54.905040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:54.933596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:55.135514 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:55.135609 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:55.135625 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:55.136850 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:56:55.138043 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:56:55.138141 kernel: ata3.00: applying bridge limits
Jan 29 11:56:55.138935 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:55.139899 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:55.140894 kernel: ata3.00: configured for UDMA/100
Jan 29 11:56:55.141947 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:56:55.192846 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:56:55.206837 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:56:55.206869 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:56:55.889837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:55.889924 disk-uuid[559]: The operation has completed successfully.
Jan 29 11:56:55.917589 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:56:55.917752 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:56:55.950078 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:56:55.954375 sh[595]: Success
Jan 29 11:56:55.969843 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:56:56.013304 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:56:56.023493 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:56:56.027250 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:56:56.041098 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 11:56:56.041184 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:56.041195 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:56:56.042108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:56:56.042846 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:56:56.049884 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:56:56.052545 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:56:56.064239 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:56:56.067227 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:56:56.079615 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:56.079690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:56.079706 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:56.082852 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:56.094067 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:56:56.096272 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:56.113295 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:56:56.119152 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:56:56.186797 ignition[691]: Ignition 2.19.0
Jan 29 11:56:56.186826 ignition[691]: Stage: fetch-offline
Jan 29 11:56:56.186875 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:56.186886 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:56.186993 ignition[691]: parsed url from cmdline: ""
Jan 29 11:56:56.186997 ignition[691]: no config URL provided
Jan 29 11:56:56.187002 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:56:56.187012 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:56:56.187039 ignition[691]: op(1): [started] loading QEMU firmware config module
Jan 29 11:56:56.187047 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:56:56.194516 ignition[691]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:56:56.196263 ignition[691]: parsing config with SHA512: 0bbea4d8a2fe15aae9afda8ae51061f6869b0bd2c30c19d18c70c456710cbef45862664950413a42f521580a51eea4c07a5de65cba533dd3715cb4380fb9d438
Jan 29 11:56:56.199110 unknown[691]: fetched base config from "system"
Jan 29 11:56:56.199123 unknown[691]: fetched user config from "qemu"
Jan 29 11:56:56.199350 ignition[691]: fetch-offline: fetch-offline passed
Jan 29 11:56:56.199424 ignition[691]: Ignition finished successfully
Jan 29 11:56:56.202492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:56:56.219777 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:56:56.234052 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:56:56.264345 systemd-networkd[784]: lo: Link UP
Jan 29 11:56:56.264363 systemd-networkd[784]: lo: Gained carrier
Jan 29 11:56:56.266428 systemd-networkd[784]: Enumeration completed
Jan 29 11:56:56.266587 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:56:56.267095 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:56.267100 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:56:56.268260 systemd-networkd[784]: eth0: Link UP
Jan 29 11:56:56.268265 systemd-networkd[784]: eth0: Gained carrier
Jan 29 11:56:56.268273 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:56.269008 systemd[1]: Reached target network.target - Network.
Jan 29 11:56:56.270796 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:56:56.281017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:56:56.282866 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:56:56.295633 ignition[787]: Ignition 2.19.0
Jan 29 11:56:56.295646 ignition[787]: Stage: kargs
Jan 29 11:56:56.295874 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:56.295888 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:56.296704 ignition[787]: kargs: kargs passed
Jan 29 11:56:56.296755 ignition[787]: Ignition finished successfully
Jan 29 11:56:56.300208 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:56:56.313067 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:56:56.329047 ignition[796]: Ignition 2.19.0
Jan 29 11:56:56.329060 ignition[796]: Stage: disks
Jan 29 11:56:56.329254 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:56.329266 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:56.329957 ignition[796]: disks: disks passed
Jan 29 11:56:56.333631 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:56:56.330014 ignition[796]: Ignition finished successfully
Jan 29 11:56:56.334106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:56:56.334486 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:56:56.334926 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:56:56.335343 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:56:56.335763 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:56:56.352053 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:56:56.367290 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:56:56.378066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:56:56.388029 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:56:56.476847 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 11:56:56.477414 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:56:56.478175 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:56:56.495993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:56:56.497964 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:56:56.500308 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:56:56.500374 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:56:56.500406 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:56:56.503847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 29 11:56:56.507221 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:56.507258 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:56.507269 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:56.510843 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:56.512758 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:56:56.514771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:56:56.518596 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:56:56.561671 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:56:56.567088 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:56:56.572732 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:56:56.577716 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:56:56.681188 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:56:56.691925 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:56:56.694636 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:56:56.704844 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:56.747322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:56:56.749780 ignition[928]: INFO : Ignition 2.19.0
Jan 29 11:56:56.749780 ignition[928]: INFO : Stage: mount
Jan 29 11:56:56.749780 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:56.749780 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:56.749780 ignition[928]: INFO : mount: mount passed
Jan 29 11:56:56.749780 ignition[928]: INFO : Ignition finished successfully
Jan 29 11:56:56.751573 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:56:56.758912 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:56:57.040715 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:56:57.053073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:56:57.076852 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
Jan 29 11:56:57.079191 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:57.079226 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:57.079242 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:57.082851 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:57.084216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:56:57.111320 ignition[960]: INFO : Ignition 2.19.0
Jan 29 11:56:57.111320 ignition[960]: INFO : Stage: files
Jan 29 11:56:57.113517 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:57.113517 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:57.113517 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:56:57.117475 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:56:57.117475 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:56:57.117475 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:56:57.117475 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:56:57.117475 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:56:57.117338 unknown[960]: wrote ssh authorized keys file for user: core
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:56:57.126589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 11:56:57.511125 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 11:56:57.903999 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 29 11:56:57.946461 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:56:57.946461 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 29 11:56:57.950932 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:56:57.950932 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:56:57.950932 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 11:56:57.950932 ignition[960]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:56:57.973167 ignition[960]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:56:58.021569 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:56:58.021569 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:56:58.021569 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:56:58.021569 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:56:58.021569 ignition[960]: INFO : files: files passed
Jan 29 11:56:58.021569 ignition[960]: INFO : Ignition finished successfully
Jan 29 11:56:57.981790 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:56:58.029077 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:56:58.031840 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:56:58.033798 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:56:58.033970 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:56:58.081153 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:56:58.083431 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:56:58.083431 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:56:58.088578 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:56:58.086404 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:56:58.089157 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:56:58.137002 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:56:58.162992 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:56:58.163125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:56:58.165954 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:56:58.167981 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:56:58.170081 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:56:58.177974 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:56:58.191850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:56:58.218972 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:56:58.230630 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:56:58.230830 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:56:58.231190 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:56:58.320445 ignition[1016]: INFO : Ignition 2.19.0
Jan 29 11:56:58.320445 ignition[1016]: INFO : Stage: umount
Jan 29 11:56:58.320445 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:58.320445 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:58.320445 ignition[1016]: INFO : umount: umount passed
Jan 29 11:56:58.320445 ignition[1016]: INFO : Ignition finished successfully
Jan 29 11:56:58.231498 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:56:58.231636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:56:58.254110 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:56:58.254452 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:56:58.254784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:56:58.255123 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:56:58.255446 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:56:58.255775 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:56:58.256115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:56:58.256446 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:56:58.256770 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:56:58.257109 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:56:58.257399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:56:58.257543 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:56:58.258403 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:56:58.258756 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:56:58.259057 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:56:58.259184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:56:58.259562 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:56:58.259693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:56:58.260373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:56:58.260506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:56:58.261079 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:56:58.261472 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:56:58.264853 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:56:58.295722 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:56:58.296237 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:56:58.296543 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:56:58.296636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:56:58.297222 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:56:58.297311 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:56:58.297708 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:56:58.297829 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:56:58.298207 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:56:58.298310 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:56:58.299423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:56:58.300500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:56:58.300780 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:56:58.300966 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:56:58.301395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:56:58.301539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:56:58.306105 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:56:58.306237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:56:58.321055 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:56:58.321182 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:56:58.322865 systemd[1]: Stopped target network.target - Network.
Jan 29 11:56:58.324559 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:56:58.324616 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:56:58.326898 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:56:58.326962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:56:58.329443 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:56:58.329492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:56:58.331469 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:56:58.331520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:56:58.344533 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:56:58.346514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:56:58.348553 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:56:58.358178 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:56:58.358311 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:56:58.358900 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 29 11:56:58.362627 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:56:58.363016 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:56:58.365751 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:56:58.365839 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:56:58.380019 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:56:58.381481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:56:58.381548 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:56:58.383635 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:56:58.383691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:56:58.385665 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:56:58.385717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:56:58.443919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:56:58.444016 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:56:58.446028 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:56:58.460081 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:56:58.460229 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:56:58.524426 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:56:58.524652 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:56:58.527044 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:56:58.527102 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:56:58.528846 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:56:58.528891 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:56:58.529955 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:56:58.530007 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:56:58.530725 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:56:58.530776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:56:58.531424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:56:58.531478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:58.547180 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:56:58.548754 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:56:58.548877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:56:58.551335 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:56:58.551406 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:56:58.553463 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:56:58.553519 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:56:58.555761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:56:58.555826 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:58.557333 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:56:58.557450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:56:58.683447 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:56:58.683603 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:56:58.687059 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:56:58.689356 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:56:58.689432 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:56:58.709041 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:56:58.717582 systemd[1]: Switching root.
Jan 29 11:56:58.754758 systemd-journald[193]: Journal stopped
Jan 29 11:56:59.826121 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:56:59.826214 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:56:59.826234 kernel: SELinux: policy capability open_perms=1
Jan 29 11:56:59.826255 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:56:59.826271 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:56:59.826286 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:56:59.826303 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:56:59.826326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:56:59.826342 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:56:59.826360 kernel: audit: type=1403 audit(1738151819.066:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:56:59.826378 systemd[1]: Successfully loaded SELinux policy in 41.949ms.
Jan 29 11:56:59.826407 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.467ms.
Jan 29 11:56:59.826425 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:56:59.826441 systemd[1]: Detected virtualization kvm.
Jan 29 11:56:59.826461 systemd[1]: Detected architecture x86-64.
Jan 29 11:56:59.826478 systemd[1]: Detected first boot.
Jan 29 11:56:59.826502 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:56:59.826523 zram_generator::config[1060]: No configuration found.
Jan 29 11:56:59.826548 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:56:59.826565 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:56:59.826582 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:56:59.826599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:56:59.826616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:56:59.826633 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:56:59.826650 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:56:59.826670 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:56:59.826688 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:56:59.826705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:56:59.826722 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:56:59.826738 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:56:59.826756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:56:59.826774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:56:59.826791 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:56:59.826973 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:56:59.827002 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:56:59.827020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:56:59.827037 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:56:59.827054 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:56:59.827078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:56:59.827096 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:56:59.827114 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:56:59.827134 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:56:59.827152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:56:59.827169 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:56:59.827186 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:56:59.827204 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:56:59.827221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:56:59.827238 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:56:59.827256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:56:59.827272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:56:59.827292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:56:59.827312 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:56:59.827329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:56:59.827345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:56:59.827362 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:56:59.827379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:56:59.827396 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:56:59.827413 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:56:59.827430 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:56:59.827451 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:56:59.827468 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:56:59.827485 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:56:59.827503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:56:59.827520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:56:59.827537 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:56:59.827555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:56:59.827571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:56:59.827588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:56:59.827609 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:56:59.827626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:56:59.827643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:56:59.827662 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:56:59.827679 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:56:59.827695 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:56:59.827712 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:56:59.827731 kernel: fuse: init (API version 7.39)
Jan 29 11:56:59.827754 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:56:59.827770 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:56:59.827787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:56:59.827804 kernel: loop: module loaded
Jan 29 11:56:59.827838 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:56:59.827855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:56:59.827872 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:56:59.827899 systemd[1]: Stopped verity-setup.service.
Jan 29 11:56:59.827926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:56:59.827947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:56:59.827964 kernel: ACPI: bus type drm_connector registered
Jan 29 11:56:59.827980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:56:59.827997 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:56:59.828014 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:56:59.828061 systemd-journald[1134]: Collecting audit messages is disabled.
Jan 29 11:56:59.828098 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:56:59.828118 systemd-journald[1134]: Journal started
Jan 29 11:56:59.828148 systemd-journald[1134]: Runtime Journal (/run/log/journal/6b58c1ddc79b4f248c653367b9b0a667) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:56:59.828199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:56:59.592546 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:56:59.609243 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:56:59.609737 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:56:59.832468 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:56:59.833536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:56:59.835253 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:56:59.835469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:56:59.837193 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:56:59.838828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:56:59.839013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:56:59.840589 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:56:59.840848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:56:59.842382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:56:59.842557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:56:59.844143 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:56:59.844315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:56:59.846011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:56:59.846223 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:56:59.847898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:56:59.849403 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:56:59.851010 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:56:59.870551 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:56:59.876916 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:56:59.879919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:56:59.881218 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:56:59.881250 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:56:59.884239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:56:59.887373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:56:59.890507 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:56:59.892101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:56:59.928475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:56:59.931679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:56:59.932977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:56:59.936006 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:56:59.937411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:56:59.940301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:56:59.946568 systemd-journald[1134]: Time spent on flushing to /var/log/journal/6b58c1ddc79b4f248c653367b9b0a667 is 40.203ms for 977 entries.
Jan 29 11:56:59.946568 systemd-journald[1134]: System Journal (/var/log/journal/6b58c1ddc79b4f248c653367b9b0a667) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:57:00.008407 systemd-journald[1134]: Received client request to flush runtime journal.
Jan 29 11:57:00.008480 kernel: loop0: detected capacity change from 0 to 142488
Jan 29 11:56:59.948015 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:56:59.951412 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:56:59.960694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:56:59.962728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:56:59.964565 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:56:59.966660 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:56:59.968743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:56:59.978388 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:57:00.098591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:57:00.100845 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:57:00.105141 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:57:00.107404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:57:00.109317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:57:00.124849 kernel: loop1: detected capacity change from 0 to 140768
Jan 29 11:57:00.132338 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 29 11:57:00.132362 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 29 11:57:00.135201 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:57:00.140754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:57:00.142013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:57:00.143985 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:57:00.167371 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:57:00.191858 kernel: loop2: detected capacity change from 0 to 210664
Jan 29 11:57:00.256972 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:57:00.260863 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 11:57:00.273119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:57:00.279828 kernel: loop4: detected capacity change from 0 to 140768
Jan 29 11:57:00.297847 kernel: loop5: detected capacity change from 0 to 210664
Jan 29 11:57:00.296538 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 11:57:00.296566 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 11:57:00.304928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:57:00.307972 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:57:00.308591 (sd-merge)[1199]: Merged extensions into '/usr'.
Jan 29 11:57:00.352033 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:57:00.352057 systemd[1]: Reloading...
Jan 29 11:57:00.454290 zram_generator::config[1230]: No configuration found.
Jan 29 11:57:00.618263 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:57:00.635034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:57:00.691803 systemd[1]: Reloading finished in 338 ms.
Jan 29 11:57:00.727736 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:57:00.729763 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:57:00.747133 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:57:00.750088 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:57:00.756533 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:57:00.756556 systemd[1]: Reloading...
Jan 29 11:57:00.784368 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:57:00.784934 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:57:00.786261 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:57:00.786666 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jan 29 11:57:00.786772 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jan 29 11:57:00.792030 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:57:00.792165 systemd-tmpfiles[1267]: Skipping /boot
Jan 29 11:57:00.830393 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:57:00.830561 systemd-tmpfiles[1267]: Skipping /boot
Jan 29 11:57:00.888877 zram_generator::config[1297]: No configuration found.
Jan 29 11:57:01.048428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:57:01.106970 systemd[1]: Reloading finished in 349 ms. Jan 29 11:57:01.127681 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:57:01.147492 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:57:01.150020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:57:01.152440 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:57:01.156491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:57:01.162564 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:57:01.168174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.168350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:57:01.175067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:57:01.180380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:57:01.185100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:57:01.186502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:57:01.186614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.187618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
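The docker.socket warning logged during both reloads is systemd's legacy-path translation: ListenStream= points below /var/run/, which on current systems is a symlink to /run/, so systemd rewrites the path at load time and asks that the unit file be updated. A sketch of the fix at the line the message names (unit file contents beyond this directive are assumed):

```ini
# /usr/lib/systemd/system/docker.socket (relevant excerpt)
[Socket]
# Before (triggers the legacy-directory warning):
# ListenStream=/var/run/docker.sock
# After:
ListenStream=/run/docker.sock
```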
Jan 29 11:57:01.187790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:57:01.201579 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:57:01.203286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:57:01.211055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.211348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:57:01.222288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:57:01.223589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:57:01.223712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.224465 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:57:01.230127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.232444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:57:01.240248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:57:01.245036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:57:01.245261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:57:01.246399 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 11:57:01.247175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:57:01.253558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:57:01.254083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:57:01.257150 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:57:01.257389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:57:01.259630 augenrules[1361]: No rules Jan 29 11:57:01.271035 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:57:01.273083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:57:01.275261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:57:01.275481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:57:01.305209 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:57:01.307267 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:57:01.311296 systemd[1]: Finished ensure-sysext.service. Jan 29 11:57:01.313105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:57:01.313220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:57:01.323032 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:57:01.325786 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:57:01.330513 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 29 11:57:01.331656 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:57:01.352545 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:57:01.367144 systemd-udevd[1380]: Using default interface naming scheme 'v255'. Jan 29 11:57:01.381705 systemd-resolved[1336]: Positive Trust Anchors: Jan 29 11:57:01.381731 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:57:01.381777 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:57:01.386678 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jan 29 11:57:01.389417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:57:01.390892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:57:01.395526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:57:01.406136 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:57:01.410063 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:57:01.416022 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 29 11:57:01.454008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384) Jan 29 11:57:01.511390 systemd-networkd[1388]: lo: Link UP Jan 29 11:57:01.511408 systemd-networkd[1388]: lo: Gained carrier Jan 29 11:57:01.515619 systemd-networkd[1388]: Enumeration completed Jan 29 11:57:01.517271 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:57:01.517277 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:57:01.518797 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:57:01.518869 systemd-networkd[1388]: eth0: Link UP Jan 29 11:57:01.518874 systemd-networkd[1388]: eth0: Gained carrier Jan 29 11:57:01.518886 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:57:01.525635 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:57:01.528465 systemd[1]: Reached target network.target - Network. Jan 29 11:57:01.537051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:57:01.542045 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:57:01.546849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:57:01.546950 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:57:01.548161 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Jan 29 11:57:02.006141 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jan 29 11:57:02.006221 systemd-timesyncd[1379]: Initial clock synchronization to Wed 2025-01-29 11:57:02.005997 UTC. Jan 29 11:57:02.007343 systemd-resolved[1336]: Clock change detected. Flushing caches. Jan 29 11:57:02.009997 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:57:02.019433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:57:02.079301 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:57:02.084982 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:57:02.088506 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 11:57:02.093692 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:57:02.094020 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:57:02.094272 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:57:02.093435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:57:02.097354 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:57:02.115687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:57:02.176748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:57:02.210243 kernel: kvm_amd: TSC scaling supported Jan 29 11:57:02.210318 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:57:02.210357 kernel: kvm_amd: Nested Paging enabled Jan 29 11:57:02.211197 kernel: kvm_amd: LBR virtualization supported Jan 29 11:57:02.211221 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:57:02.212172 kernel: kvm_amd: Virtual GIF supported Jan 29 11:57:02.230967 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:57:02.256340 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 29 11:57:02.267248 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:57:02.277751 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:57:02.315277 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:57:02.316905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:57:02.318108 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:57:02.319349 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:57:02.320646 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:57:02.322227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:57:02.323494 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:57:02.324870 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:57:02.326178 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:57:02.326204 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:57:02.327149 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:57:02.328865 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:57:02.332256 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:57:02.344760 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:57:02.347683 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:57:02.349426 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:57:02.350852 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 29 11:57:02.352055 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:57:02.353172 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:57:02.353205 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:57:02.354515 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:57:02.356920 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:57:02.360238 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:57:02.361584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:57:02.365151 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:57:02.367078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:57:02.370911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:57:02.374648 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:57:02.375759 jq[1436]: false Jan 29 11:57:02.381439 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 11:57:02.384961 extend-filesystems[1437]: Found loop3 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found loop4 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found loop5 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found sr0 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda1 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda2 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda3 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found usr Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda4 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda6 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda7 Jan 29 11:57:02.384961 extend-filesystems[1437]: Found vda9 Jan 29 11:57:02.384961 extend-filesystems[1437]: Checking size of /dev/vda9 Jan 29 11:57:02.409864 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:57:02.387897 dbus-daemon[1435]: [system] SELinux support is enabled Jan 29 11:57:02.394106 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:57:02.410319 extend-filesystems[1437]: Resized partition /dev/vda9 Jan 29 11:57:02.413895 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1400) Jan 29 11:57:02.395805 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:57:02.414150 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:57:02.396390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:57:02.415893 jq[1455]: true Jan 29 11:57:02.399240 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:57:02.402494 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 29 11:57:02.404757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:57:02.410332 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:57:02.422664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:57:02.422936 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:57:02.423324 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:57:02.423536 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:57:02.425378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:57:02.425599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:57:02.440987 update_engine[1452]: I20250129 11:57:02.440845 1452 main.cc:92] Flatcar Update Engine starting Jan 29 11:57:02.446952 update_engine[1452]: I20250129 11:57:02.444160 1452 update_check_scheduler.cc:74] Next update check in 2m0s Jan 29 11:57:02.449357 jq[1459]: true Jan 29 11:57:02.450481 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:57:02.458784 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:57:02.458876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:57:02.460464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:57:02.460492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 29 11:57:02.462390 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:57:02.472374 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:57:02.482613 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:57:02.540833 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:57:02.540833 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:57:02.540833 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:57:02.551871 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 29 11:57:02.542565 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:57:02.553516 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:57:02.542888 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:57:02.554620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:57:02.557794 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:57:02.557829 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:57:02.559731 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:57:02.559987 systemd-logind[1443]: New seat seat0. Jan 29 11:57:02.561397 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:57:02.605289 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:57:02.900170 containerd[1460]: time="2025-01-29T11:57:02.899978281Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:57:02.973815 containerd[1460]: time="2025-01-29T11:57:02.973722040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:57:02.977691 containerd[1460]: time="2025-01-29T11:57:02.977638434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:57:02.977691 containerd[1460]: time="2025-01-29T11:57:02.977673229Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:57:02.977691 containerd[1460]: time="2025-01-29T11:57:02.977690080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:57:02.978023 containerd[1460]: time="2025-01-29T11:57:02.977985043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:57:02.978023 containerd[1460]: time="2025-01-29T11:57:02.978012946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978182 containerd[1460]: time="2025-01-29T11:57:02.978143881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978182 containerd[1460]: time="2025-01-29T11:57:02.978165953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978440 containerd[1460]: time="2025-01-29T11:57:02.978403498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978440 containerd[1460]: time="2025-01-29T11:57:02.978422894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978440 containerd[1460]: time="2025-01-29T11:57:02.978435979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978518 containerd[1460]: time="2025-01-29T11:57:02.978446930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978598 containerd[1460]: time="2025-01-29T11:57:02.978571132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.978905 containerd[1460]: time="2025-01-29T11:57:02.978865945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:57:02.979087 containerd[1460]: time="2025-01-29T11:57:02.979048337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:57:02.979087 containerd[1460]: time="2025-01-29T11:57:02.979068355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:57:02.979217 containerd[1460]: time="2025-01-29T11:57:02.979184122Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 11:57:02.979287 containerd[1460]: time="2025-01-29T11:57:02.979263340Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:57:02.985022 containerd[1460]: time="2025-01-29T11:57:02.984974499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:57:02.985087 containerd[1460]: time="2025-01-29T11:57:02.985046724Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:57:02.985087 containerd[1460]: time="2025-01-29T11:57:02.985080217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:57:02.985139 containerd[1460]: time="2025-01-29T11:57:02.985103390Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:57:02.985139 containerd[1460]: time="2025-01-29T11:57:02.985120192Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:57:02.985301 containerd[1460]: time="2025-01-29T11:57:02.985262799Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:57:02.985708 containerd[1460]: time="2025-01-29T11:57:02.985644575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:57:02.985986 containerd[1460]: time="2025-01-29T11:57:02.985918248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:57:02.985986 containerd[1460]: time="2025-01-29T11:57:02.985971959Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:57:02.985986 containerd[1460]: time="2025-01-29T11:57:02.985985304Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986000613Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986016663Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986030168Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986045447Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986061116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986074992Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986087506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986099849Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986131939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986146196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986158899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986171343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986185349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986219 containerd[1460]: time="2025-01-29T11:57:02.986199355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986212460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986225585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986239080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986254238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986272142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986284705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986298351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986316555Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986341742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986354596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986365687Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986422023Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986443994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:57:02.986562 containerd[1460]: time="2025-01-29T11:57:02.986454734Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:57:02.986916 containerd[1460]: time="2025-01-29T11:57:02.986466516Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:57:02.986916 containerd[1460]: time="2025-01-29T11:57:02.986476134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.986916 containerd[1460]: time="2025-01-29T11:57:02.986490151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 11:57:02.986916 containerd[1460]: time="2025-01-29T11:57:02.986511941Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:57:02.986916 containerd[1460]: time="2025-01-29T11:57:02.986662353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:57:02.988164 containerd[1460]: time="2025-01-29T11:57:02.988062238Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:57:02.988164 containerd[1460]: time="2025-01-29T11:57:02.988143370Z" level=info msg="Connect containerd service" Jan 29 11:57:02.988485 containerd[1460]: time="2025-01-29T11:57:02.988213592Z" level=info msg="using legacy CRI server" Jan 29 11:57:02.988485 containerd[1460]: time="2025-01-29T11:57:02.988231125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:57:02.988485 containerd[1460]: time="2025-01-29T11:57:02.988376217Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:57:02.989312 containerd[1460]: time="2025-01-29T11:57:02.989269722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:57:02.989589 containerd[1460]: time="2025-01-29T11:57:02.989474436Z" level=info msg="Start subscribing containerd event" Jan 29 
11:57:02.989589 containerd[1460]: time="2025-01-29T11:57:02.989523698Z" level=info msg="Start recovering state" Jan 29 11:57:02.989777 containerd[1460]: time="2025-01-29T11:57:02.989733873Z" level=info msg="Start event monitor" Jan 29 11:57:02.989777 containerd[1460]: time="2025-01-29T11:57:02.989766333Z" level=info msg="Start snapshots syncer" Jan 29 11:57:02.989833 containerd[1460]: time="2025-01-29T11:57:02.989781291Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:57:02.989833 containerd[1460]: time="2025-01-29T11:57:02.989791060Z" level=info msg="Start streaming server" Jan 29 11:57:02.992966 containerd[1460]: time="2025-01-29T11:57:02.990353164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:57:02.992966 containerd[1460]: time="2025-01-29T11:57:02.990455185Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:57:02.992966 containerd[1460]: time="2025-01-29T11:57:02.990791886Z" level=info msg="containerd successfully booted in 0.092290s" Jan 29 11:57:02.990641 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:57:03.267022 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:57:03.296793 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:57:03.310350 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:57:03.319233 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:57:03.319493 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:57:03.322574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:57:03.413876 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:57:03.432572 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:57:03.435446 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 29 11:57:03.436776 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:57:03.801203 systemd-networkd[1388]: eth0: Gained IPv6LL Jan 29 11:57:03.805035 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:57:03.806968 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:57:03.815242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:57:03.818100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:03.820374 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:57:03.843680 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:57:03.844005 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:57:03.845864 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:57:03.848476 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:57:05.235556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:05.237388 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:57:05.238704 systemd[1]: Startup finished in 881ms (kernel) + 5.352s (initrd) + 5.754s (userspace) = 11.989s. 
Jan 29 11:57:05.243557 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:57:06.042122 kubelet[1540]: E0129 11:57:06.041991 1540 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:57:06.046794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:57:06.047021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:57:06.047369 systemd[1]: kubelet.service: Consumed 2.068s CPU time. Jan 29 11:57:08.084322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:57:08.085906 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:57680.service - OpenSSH per-connection server daemon (10.0.0.1:57680). Jan 29 11:57:08.136794 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 57680 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:08.139309 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:08.149809 systemd-logind[1443]: New session 1 of user core. Jan 29 11:57:08.151198 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:57:08.167243 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:57:08.181700 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:57:08.183858 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:57:08.195437 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:57:08.320239 systemd[1558]: Queued start job for default target default.target. Jan 29 11:57:08.331756 systemd[1558]: Created slice app.slice - User Application Slice. Jan 29 11:57:08.331792 systemd[1558]: Reached target paths.target - Paths. Jan 29 11:57:08.331815 systemd[1558]: Reached target timers.target - Timers. Jan 29 11:57:08.333744 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:57:08.346049 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:57:08.346216 systemd[1558]: Reached target sockets.target - Sockets. Jan 29 11:57:08.346236 systemd[1558]: Reached target basic.target - Basic System. Jan 29 11:57:08.346292 systemd[1558]: Reached target default.target - Main User Target. Jan 29 11:57:08.346336 systemd[1558]: Startup finished in 142ms. Jan 29 11:57:08.346866 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:57:08.348882 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:57:08.412029 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:57692.service - OpenSSH per-connection server daemon (10.0.0.1:57692). Jan 29 11:57:08.458207 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 57692 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:08.459788 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:08.464446 systemd-logind[1443]: New session 2 of user core. Jan 29 11:57:08.478108 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:57:08.532994 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:08.553965 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:57692.service: Deactivated successfully. Jan 29 11:57:08.555774 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 29 11:57:08.557635 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:57:08.559121 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:57704.service - OpenSSH per-connection server daemon (10.0.0.1:57704). Jan 29 11:57:08.560075 systemd-logind[1443]: Removed session 2. Jan 29 11:57:08.599332 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 57704 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:08.601407 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:08.606074 systemd-logind[1443]: New session 3 of user core. Jan 29 11:57:08.627123 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:57:08.679179 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:08.702473 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:57704.service: Deactivated successfully. Jan 29 11:57:08.704376 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:57:08.706110 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:57:08.716219 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:57718.service - OpenSSH per-connection server daemon (10.0.0.1:57718). Jan 29 11:57:08.717190 systemd-logind[1443]: Removed session 3. Jan 29 11:57:08.750289 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 57718 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:08.752056 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:08.756588 systemd-logind[1443]: New session 4 of user core. Jan 29 11:57:08.766119 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:57:08.825119 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:08.833167 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:57718.service: Deactivated successfully. Jan 29 11:57:08.835073 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 29 11:57:08.836711 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:57:08.854244 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:57728.service - OpenSSH per-connection server daemon (10.0.0.1:57728). Jan 29 11:57:08.855485 systemd-logind[1443]: Removed session 4. Jan 29 11:57:08.890478 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 57728 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:08.892991 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:08.898070 systemd-logind[1443]: New session 5 of user core. Jan 29 11:57:08.908157 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:57:08.969177 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:57:08.969611 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:08.985662 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:08.988258 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:09.001220 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:57728.service: Deactivated successfully. Jan 29 11:57:09.003294 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:57:09.005087 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:57:09.006646 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:57742.service - OpenSSH per-connection server daemon (10.0.0.1:57742). Jan 29 11:57:09.007459 systemd-logind[1443]: Removed session 5. Jan 29 11:57:09.048219 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 57742 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:09.049977 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:09.055055 systemd-logind[1443]: New session 6 of user core. 
Jan 29 11:57:09.066095 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:57:09.123785 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:57:09.124231 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:09.129645 sudo[1602]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:09.137142 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:57:09.137529 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:09.158157 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:57:09.159991 auditctl[1605]: No rules Jan 29 11:57:09.160498 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:57:09.160783 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:57:09.164215 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:57:09.200850 augenrules[1623]: No rules Jan 29 11:57:09.204404 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:57:09.205861 sudo[1601]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:09.208421 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:09.220104 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:57742.service: Deactivated successfully. Jan 29 11:57:09.222065 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:57:09.224036 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:57:09.234502 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:57750.service - OpenSSH per-connection server daemon (10.0.0.1:57750). Jan 29 11:57:09.235645 systemd-logind[1443]: Removed session 6. 
Jan 29 11:57:09.269903 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 57750 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:09.272191 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:09.276749 systemd-logind[1443]: New session 7 of user core. Jan 29 11:57:09.286043 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:57:09.341910 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:57:09.342382 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:09.365346 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:57:09.388434 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:57:09.388678 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:57:10.161647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:10.162021 systemd[1]: kubelet.service: Consumed 2.068s CPU time. Jan 29 11:57:10.173636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:10.213536 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit session-7.scope)... Jan 29 11:57:10.213561 systemd[1]: Reloading... Jan 29 11:57:10.305012 zram_generator::config[1719]: No configuration found. Jan 29 11:57:10.860773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:57:10.939955 systemd[1]: Reloading finished in 725 ms. Jan 29 11:57:11.009296 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:57:11.009404 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:57:11.009750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:57:11.028442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:11.182890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:11.194346 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:57:11.264057 kubelet[1768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:57:11.264057 kubelet[1768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:57:11.264057 kubelet[1768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:57:11.264585 kubelet[1768]: I0129 11:57:11.264100 1768 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:57:11.645750 kubelet[1768]: I0129 11:57:11.645592 1768 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:57:11.645750 kubelet[1768]: I0129 11:57:11.645653 1768 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:57:11.646047 kubelet[1768]: I0129 11:57:11.646002 1768 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:57:11.662972 kubelet[1768]: I0129 11:57:11.662894 1768 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:57:11.675508 kubelet[1768]: I0129 11:57:11.675460 1768 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:57:11.675855 kubelet[1768]: I0129 11:57:11.675798 1768 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:57:11.676106 kubelet[1768]: I0129 11:57:11.675836 1768 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:57:11.676643 kubelet[1768]: I0129 11:57:11.676601 1768 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:57:11.676643 kubelet[1768]: I0129 11:57:11.676626 1768 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:57:11.677467 kubelet[1768]: I0129 11:57:11.677428 1768 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:11.678231 kubelet[1768]: I0129 11:57:11.678160 1768 kubelet.go:400] "Attempting to sync node with 
API server" Jan 29 11:57:11.678231 kubelet[1768]: I0129 11:57:11.678207 1768 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:57:11.678231 kubelet[1768]: I0129 11:57:11.678247 1768 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:57:11.678418 kubelet[1768]: I0129 11:57:11.678272 1768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:57:11.678624 kubelet[1768]: E0129 11:57:11.678603 1768 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:11.678740 kubelet[1768]: E0129 11:57:11.678692 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:11.681912 kubelet[1768]: I0129 11:57:11.681887 1768 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:57:11.683091 kubelet[1768]: I0129 11:57:11.683066 1768 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:57:11.683141 kubelet[1768]: W0129 11:57:11.683133 1768 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:57:11.686094 kubelet[1768]: I0129 11:57:11.685912 1768 server.go:1264] "Started kubelet" Jan 29 11:57:11.686865 kubelet[1768]: I0129 11:57:11.686815 1768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:57:11.688502 kubelet[1768]: I0129 11:57:11.687211 1768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:57:11.688502 kubelet[1768]: I0129 11:57:11.687919 1768 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:57:11.689329 kubelet[1768]: I0129 11:57:11.688659 1768 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:57:11.691292 kubelet[1768]: I0129 11:57:11.691258 1768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:57:11.694633 kubelet[1768]: I0129 11:57:11.692829 1768 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:57:11.694633 kubelet[1768]: I0129 11:57:11.693207 1768 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:57:11.694633 kubelet[1768]: I0129 11:57:11.693267 1768 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:57:11.699800 kubelet[1768]: I0129 11:57:11.698618 1768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:57:11.700478 kubelet[1768]: E0129 11:57:11.700411 1768 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:57:11.700708 kubelet[1768]: E0129 11:57:11.700673 1768 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.125\" not found" node="10.0.0.125" Jan 29 11:57:11.701036 kubelet[1768]: I0129 11:57:11.700994 1768 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:57:11.701036 kubelet[1768]: I0129 11:57:11.701021 1768 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:57:11.712237 kubelet[1768]: I0129 11:57:11.712168 1768 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:57:11.712237 kubelet[1768]: I0129 11:57:11.712187 1768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:57:11.712237 kubelet[1768]: I0129 11:57:11.712206 1768 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:11.794331 kubelet[1768]: I0129 11:57:11.794291 1768 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.125" Jan 29 11:57:12.217569 kubelet[1768]: I0129 11:57:12.217405 1768 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.125" Jan 29 11:57:12.221076 kubelet[1768]: I0129 11:57:12.221025 1768 policy_none.go:49] "None policy: Start" Jan 29 11:57:12.221669 kubelet[1768]: I0129 11:57:12.221566 1768 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:57:12.222097 kubelet[1768]: I0129 11:57:12.222060 1768 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:57:12.222165 kubelet[1768]: I0129 11:57:12.222112 1768 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:57:12.222811 containerd[1460]: time="2025-01-29T11:57:12.222758411Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:57:12.223383 kubelet[1768]: I0129 11:57:12.223006 1768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:57:12.228148 sudo[1634]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:12.230194 sshd[1631]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:12.233476 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:57750.service: Deactivated successfully. Jan 29 11:57:12.235544 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:57:12.237020 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:57:12.238517 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:57:12.239023 systemd-logind[1443]: Removed session 7. Jan 29 11:57:12.249970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:57:12.251469 kubelet[1768]: I0129 11:57:12.251425 1768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:57:12.252733 kubelet[1768]: I0129 11:57:12.252691 1768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:57:12.252772 kubelet[1768]: I0129 11:57:12.252742 1768 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:57:12.252772 kubelet[1768]: I0129 11:57:12.252766 1768 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:57:12.253134 kubelet[1768]: E0129 11:57:12.252816 1768 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:57:12.260323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 11:57:12.261758 kubelet[1768]: I0129 11:57:12.261698 1768 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:57:12.262065 kubelet[1768]: I0129 11:57:12.262010 1768 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:57:12.262368 kubelet[1768]: I0129 11:57:12.262328 1768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:57:12.649136 kubelet[1768]: I0129 11:57:12.648967 1768 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:57:12.649603 kubelet[1768]: W0129 11:57:12.649209 1768 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:57:12.649603 kubelet[1768]: W0129 11:57:12.649249 1768 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:57:12.649603 kubelet[1768]: W0129 11:57:12.649276 1768 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:57:12.679390 kubelet[1768]: E0129 11:57:12.679346 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:12.679390 kubelet[1768]: I0129 11:57:12.679355 1768 apiserver.go:52] "Watching apiserver" Jan 29 11:57:12.683312 kubelet[1768]: I0129 11:57:12.683253 1768 topology_manager.go:215] "Topology Admit Handler" 
podUID="30ba54c6-5ba3-4ff1-a963-d0154818f519" podNamespace="calico-system" podName="calico-node-l4ptl" Jan 29 11:57:12.683396 kubelet[1768]: I0129 11:57:12.683377 1768 topology_manager.go:215] "Topology Admit Handler" podUID="2ceb4503-465d-45e6-bb4a-09d53c856388" podNamespace="kube-system" podName="kube-proxy-dqfdm" Jan 29 11:57:12.683491 kubelet[1768]: I0129 11:57:12.683472 1768 topology_manager.go:215] "Topology Admit Handler" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" podNamespace="calico-system" podName="csi-node-driver-h6rn2" Jan 29 11:57:12.683684 kubelet[1768]: E0129 11:57:12.683628 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:12.692644 systemd[1]: Created slice kubepods-besteffort-pod2ceb4503_465d_45e6_bb4a_09d53c856388.slice - libcontainer container kubepods-besteffort-pod2ceb4503_465d_45e6_bb4a_09d53c856388.slice. 
Jan 29 11:57:12.718278 kubelet[1768]: I0129 11:57:12.718224 1768 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:57:12.719190 kubelet[1768]: I0129 11:57:12.719109 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ba54c6-5ba3-4ff1-a963-d0154818f519-tigera-ca-bundle\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.719955 kubelet[1768]: I0129 11:57:12.719453 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9d13e01-6e79-4768-8067-8fdf452aca9e-socket-dir\") pod \"csi-node-driver-h6rn2\" (UID: \"a9d13e01-6e79-4768-8067-8fdf452aca9e\") " pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:12.719955 kubelet[1768]: I0129 11:57:12.719497 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-flexvol-driver-host\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.719955 kubelet[1768]: I0129 11:57:12.719524 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ceb4503-465d-45e6-bb4a-09d53c856388-lib-modules\") pod \"kube-proxy-dqfdm\" (UID: \"2ceb4503-465d-45e6-bb4a-09d53c856388\") " pod="kube-system/kube-proxy-dqfdm" Jan 29 11:57:12.719955 kubelet[1768]: I0129 11:57:12.719550 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sn2h\" (UniqueName: \"kubernetes.io/projected/2ceb4503-465d-45e6-bb4a-09d53c856388-kube-api-access-2sn2h\") 
pod \"kube-proxy-dqfdm\" (UID: \"2ceb4503-465d-45e6-bb4a-09d53c856388\") " pod="kube-system/kube-proxy-dqfdm" Jan 29 11:57:12.719955 kubelet[1768]: I0129 11:57:12.719571 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9d13e01-6e79-4768-8067-8fdf452aca9e-kubelet-dir\") pod \"csi-node-driver-h6rn2\" (UID: \"a9d13e01-6e79-4768-8067-8fdf452aca9e\") " pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:12.720159 kubelet[1768]: I0129 11:57:12.719592 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-xtables-lock\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720159 kubelet[1768]: I0129 11:57:12.719612 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-var-lib-calico\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720159 kubelet[1768]: I0129 11:57:12.719639 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-cni-net-dir\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720159 kubelet[1768]: I0129 11:57:12.719660 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-cni-log-dir\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " 
pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720159 kubelet[1768]: I0129 11:57:12.719682 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpdq\" (UniqueName: \"kubernetes.io/projected/30ba54c6-5ba3-4ff1-a963-d0154818f519-kube-api-access-jgpdq\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720323 kubelet[1768]: I0129 11:57:12.719712 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ceb4503-465d-45e6-bb4a-09d53c856388-kube-proxy\") pod \"kube-proxy-dqfdm\" (UID: \"2ceb4503-465d-45e6-bb4a-09d53c856388\") " pod="kube-system/kube-proxy-dqfdm" Jan 29 11:57:12.720323 kubelet[1768]: I0129 11:57:12.719736 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ceb4503-465d-45e6-bb4a-09d53c856388-xtables-lock\") pod \"kube-proxy-dqfdm\" (UID: \"2ceb4503-465d-45e6-bb4a-09d53c856388\") " pod="kube-system/kube-proxy-dqfdm" Jan 29 11:57:12.720323 kubelet[1768]: I0129 11:57:12.719756 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9d13e01-6e79-4768-8067-8fdf452aca9e-registration-dir\") pod \"csi-node-driver-h6rn2\" (UID: \"a9d13e01-6e79-4768-8067-8fdf452aca9e\") " pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:12.720323 kubelet[1768]: I0129 11:57:12.719782 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/30ba54c6-5ba3-4ff1-a963-d0154818f519-node-certs\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720323 
kubelet[1768]: I0129 11:57:12.719805 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-var-run-calico\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720479 kubelet[1768]: I0129 11:57:12.719829 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-cni-bin-dir\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720479 kubelet[1768]: I0129 11:57:12.719854 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4czc\" (UniqueName: \"kubernetes.io/projected/a9d13e01-6e79-4768-8067-8fdf452aca9e-kube-api-access-m4czc\") pod \"csi-node-driver-h6rn2\" (UID: \"a9d13e01-6e79-4768-8067-8fdf452aca9e\") " pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:12.720479 kubelet[1768]: I0129 11:57:12.719880 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-lib-modules\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720479 kubelet[1768]: I0129 11:57:12.719904 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/30ba54c6-5ba3-4ff1-a963-d0154818f519-policysync\") pod \"calico-node-l4ptl\" (UID: \"30ba54c6-5ba3-4ff1-a963-d0154818f519\") " pod="calico-system/calico-node-l4ptl" Jan 29 11:57:12.720479 kubelet[1768]: I0129 11:57:12.719966 1768 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a9d13e01-6e79-4768-8067-8fdf452aca9e-varrun\") pod \"csi-node-driver-h6rn2\" (UID: \"a9d13e01-6e79-4768-8067-8fdf452aca9e\") " pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:12.730264 systemd[1]: Created slice kubepods-besteffort-pod30ba54c6_5ba3_4ff1_a963_d0154818f519.slice - libcontainer container kubepods-besteffort-pod30ba54c6_5ba3_4ff1_a963_d0154818f519.slice. Jan 29 11:57:12.821590 kubelet[1768]: E0129 11:57:12.821549 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.821590 kubelet[1768]: W0129 11:57:12.821579 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.821758 kubelet[1768]: E0129 11:57:12.821628 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.821948 kubelet[1768]: E0129 11:57:12.821917 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.821948 kubelet[1768]: W0129 11:57:12.821946 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.822022 kubelet[1768]: E0129 11:57:12.821961 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.822231 kubelet[1768]: E0129 11:57:12.822217 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.822231 kubelet[1768]: W0129 11:57:12.822228 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.822296 kubelet[1768]: E0129 11:57:12.822240 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.822479 kubelet[1768]: E0129 11:57:12.822454 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.822479 kubelet[1768]: W0129 11:57:12.822467 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.822543 kubelet[1768]: E0129 11:57:12.822481 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.822716 kubelet[1768]: E0129 11:57:12.822693 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.822716 kubelet[1768]: W0129 11:57:12.822714 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.822790 kubelet[1768]: E0129 11:57:12.822762 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.822966 kubelet[1768]: E0129 11:57:12.822948 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.822966 kubelet[1768]: W0129 11:57:12.822962 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.823042 kubelet[1768]: E0129 11:57:12.822993 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.823189 kubelet[1768]: E0129 11:57:12.823175 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.823189 kubelet[1768]: W0129 11:57:12.823186 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.823239 kubelet[1768]: E0129 11:57:12.823213 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.823386 kubelet[1768]: E0129 11:57:12.823374 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.823386 kubelet[1768]: W0129 11:57:12.823384 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.823443 kubelet[1768]: E0129 11:57:12.823419 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.823598 kubelet[1768]: E0129 11:57:12.823579 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.823598 kubelet[1768]: W0129 11:57:12.823590 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.823644 kubelet[1768]: E0129 11:57:12.823615 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.823798 kubelet[1768]: E0129 11:57:12.823786 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.823798 kubelet[1768]: W0129 11:57:12.823796 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.823866 kubelet[1768]: E0129 11:57:12.823823 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.824035 kubelet[1768]: E0129 11:57:12.824018 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.824035 kubelet[1768]: W0129 11:57:12.824030 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.824195 kubelet[1768]: E0129 11:57:12.824130 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.824274 kubelet[1768]: E0129 11:57:12.824261 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.824274 kubelet[1768]: W0129 11:57:12.824271 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.824448 kubelet[1768]: E0129 11:57:12.824406 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.824498 kubelet[1768]: E0129 11:57:12.824465 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.824498 kubelet[1768]: W0129 11:57:12.824472 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.824613 kubelet[1768]: E0129 11:57:12.824528 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.824668 kubelet[1768]: E0129 11:57:12.824654 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.824668 kubelet[1768]: W0129 11:57:12.824664 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.824781 kubelet[1768]: E0129 11:57:12.824758 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.824894 kubelet[1768]: E0129 11:57:12.824879 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.824894 kubelet[1768]: W0129 11:57:12.824891 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.825733 kubelet[1768]: E0129 11:57:12.824951 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.825733 kubelet[1768]: E0129 11:57:12.825167 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.825733 kubelet[1768]: W0129 11:57:12.825182 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.825733 kubelet[1768]: E0129 11:57:12.825249 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.825733 kubelet[1768]: E0129 11:57:12.825554 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.825733 kubelet[1768]: W0129 11:57:12.825564 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.825733 kubelet[1768]: E0129 11:57:12.825666 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.825999 kubelet[1768]: E0129 11:57:12.825839 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.825999 kubelet[1768]: W0129 11:57:12.825849 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.826072 kubelet[1768]: E0129 11:57:12.826052 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.826072 kubelet[1768]: W0129 11:57:12.826062 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.826266 kubelet[1768]: E0129 11:57:12.826161 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.826266 kubelet[1768]: E0129 11:57:12.826206 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.826266 kubelet[1768]: E0129 11:57:12.826242 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.826266 kubelet[1768]: W0129 11:57:12.826251 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.826300 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.826464 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.828992 kubelet[1768]: W0129 11:57:12.826480 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.826727 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.828992 kubelet[1768]: W0129 11:57:12.826746 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.827043 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jan 29 11:57:12.828992 kubelet[1768]: W0129 11:57:12.827059 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.827310 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.828992 kubelet[1768]: W0129 11:57:12.827320 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.828992 kubelet[1768]: E0129 11:57:12.827535 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.829224 kubelet[1768]: W0129 11:57:12.827544 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.829224 kubelet[1768]: E0129 11:57:12.827769 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.829224 kubelet[1768]: W0129 11:57:12.827778 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.829224 kubelet[1768]: E0129 11:57:12.828863 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.829224 kubelet[1768]: E0129 11:57:12.828900 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.829224 kubelet[1768]: E0129 11:57:12.828971 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.829403 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.832097 kubelet[1768]: W0129 11:57:12.829419 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.829434 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.829672 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.832097 kubelet[1768]: W0129 11:57:12.829680 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.829690 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.830072 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.832097 kubelet[1768]: W0129 11:57:12.830081 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.830091 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.832097 kubelet[1768]: E0129 11:57:12.830107 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.832378 kubelet[1768]: E0129 11:57:12.830122 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.832378 kubelet[1768]: E0129 11:57:12.830137 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.833948 kubelet[1768]: E0129 11:57:12.833761 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.833948 kubelet[1768]: W0129 11:57:12.833782 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.833948 kubelet[1768]: E0129 11:57:12.833801 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:12.834128 kubelet[1768]: E0129 11:57:12.834108 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.834128 kubelet[1768]: W0129 11:57:12.834126 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.834193 kubelet[1768]: E0129 11:57:12.834147 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:12.834459 kubelet[1768]: E0129 11:57:12.834434 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:12.834459 kubelet[1768]: W0129 11:57:12.834454 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:12.834550 kubelet[1768]: E0129 11:57:12.834474 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:13.029551 kubelet[1768]: E0129 11:57:13.029386 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:13.030495 containerd[1460]: time="2025-01-29T11:57:13.030412813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqfdm,Uid:2ceb4503-465d-45e6-bb4a-09d53c856388,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:13.032679 kubelet[1768]: E0129 11:57:13.032653 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:13.033226 containerd[1460]: time="2025-01-29T11:57:13.033045870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4ptl,Uid:30ba54c6-5ba3-4ff1-a963-d0154818f519,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:13.680531 kubelet[1768]: E0129 11:57:13.680449 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:13.851987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740288485.mount: Deactivated successfully. 
Jan 29 11:57:13.869206 containerd[1460]: time="2025-01-29T11:57:13.869129927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:13.870183 containerd[1460]: time="2025-01-29T11:57:13.870147245Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:13.871188 containerd[1460]: time="2025-01-29T11:57:13.871103388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:57:13.872104 containerd[1460]: time="2025-01-29T11:57:13.872075681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:57:13.873589 containerd[1460]: time="2025-01-29T11:57:13.873540928Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:13.877794 containerd[1460]: time="2025-01-29T11:57:13.877739290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:13.878846 containerd[1460]: time="2025-01-29T11:57:13.878802974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 848.244979ms" Jan 29 11:57:13.879792 containerd[1460]: 
time="2025-01-29T11:57:13.879751593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 846.600856ms" Jan 29 11:57:14.106478 containerd[1460]: time="2025-01-29T11:57:14.106114002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:14.106478 containerd[1460]: time="2025-01-29T11:57:14.106177652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:14.106478 containerd[1460]: time="2025-01-29T11:57:14.106214972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:14.106478 containerd[1460]: time="2025-01-29T11:57:14.106370644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:14.106730 containerd[1460]: time="2025-01-29T11:57:14.106454701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:14.106730 containerd[1460]: time="2025-01-29T11:57:14.106505757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:14.106730 containerd[1460]: time="2025-01-29T11:57:14.106516788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:14.106730 containerd[1460]: time="2025-01-29T11:57:14.106582952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:14.253875 kubelet[1768]: E0129 11:57:14.253771 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:14.283156 systemd[1]: Started cri-containerd-799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6.scope - libcontainer container 799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6. Jan 29 11:57:14.287593 systemd[1]: Started cri-containerd-3563683b75210114860d20202caa05e20331d707c6d9e9249eb1d002d60a60ca.scope - libcontainer container 3563683b75210114860d20202caa05e20331d707c6d9e9249eb1d002d60a60ca. Jan 29 11:57:14.333277 containerd[1460]: time="2025-01-29T11:57:14.333213313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqfdm,Uid:2ceb4503-465d-45e6-bb4a-09d53c856388,Namespace:kube-system,Attempt:0,} returns sandbox id \"3563683b75210114860d20202caa05e20331d707c6d9e9249eb1d002d60a60ca\"" Jan 29 11:57:14.334999 kubelet[1768]: E0129 11:57:14.334909 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:14.337023 containerd[1460]: time="2025-01-29T11:57:14.336864299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:57:14.341299 containerd[1460]: time="2025-01-29T11:57:14.341228252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4ptl,Uid:30ba54c6-5ba3-4ff1-a963-d0154818f519,Namespace:calico-system,Attempt:0,} returns sandbox id \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\"" Jan 29 11:57:14.342962 kubelet[1768]: E0129 11:57:14.342916 1768 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:14.681557 kubelet[1768]: E0129 11:57:14.681495 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:15.682486 kubelet[1768]: E0129 11:57:15.682417 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:16.254156 kubelet[1768]: E0129 11:57:16.254056 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:16.389782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867930329.mount: Deactivated successfully. 
Jan 29 11:57:16.683725 kubelet[1768]: E0129 11:57:16.683498 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:17.144318 containerd[1460]: time="2025-01-29T11:57:17.144126405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.145267 containerd[1460]: time="2025-01-29T11:57:17.145199387Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:57:17.146803 containerd[1460]: time="2025-01-29T11:57:17.146756877Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.150162 containerd[1460]: time="2025-01-29T11:57:17.150107259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.150825 containerd[1460]: time="2025-01-29T11:57:17.150780080Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.813867791s" Jan 29 11:57:17.150825 containerd[1460]: time="2025-01-29T11:57:17.150819615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:57:17.152215 containerd[1460]: time="2025-01-29T11:57:17.152059860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:57:17.156427 
containerd[1460]: time="2025-01-29T11:57:17.156304849Z" level=info msg="CreateContainer within sandbox \"3563683b75210114860d20202caa05e20331d707c6d9e9249eb1d002d60a60ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:57:17.209621 containerd[1460]: time="2025-01-29T11:57:17.209536399Z" level=info msg="CreateContainer within sandbox \"3563683b75210114860d20202caa05e20331d707c6d9e9249eb1d002d60a60ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31035722c38a805a7f06911beaa291adfc3eb72bfe2dd351707e85ddb64db5ca\"" Jan 29 11:57:17.210399 containerd[1460]: time="2025-01-29T11:57:17.210344363Z" level=info msg="StartContainer for \"31035722c38a805a7f06911beaa291adfc3eb72bfe2dd351707e85ddb64db5ca\"" Jan 29 11:57:17.250099 systemd[1]: Started cri-containerd-31035722c38a805a7f06911beaa291adfc3eb72bfe2dd351707e85ddb64db5ca.scope - libcontainer container 31035722c38a805a7f06911beaa291adfc3eb72bfe2dd351707e85ddb64db5ca. Jan 29 11:57:17.285399 containerd[1460]: time="2025-01-29T11:57:17.285341213Z" level=info msg="StartContainer for \"31035722c38a805a7f06911beaa291adfc3eb72bfe2dd351707e85ddb64db5ca\" returns successfully" Jan 29 11:57:17.683865 kubelet[1768]: E0129 11:57:17.683802 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:18.253984 kubelet[1768]: E0129 11:57:18.253895 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:18.268236 kubelet[1768]: E0129 11:57:18.268179 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
11:57:18.283435 kubelet[1768]: I0129 11:57:18.283344 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqfdm" podStartSLOduration=4.467697581 podStartE2EDuration="7.283319011s" podCreationTimestamp="2025-01-29 11:57:11 +0000 UTC" firstStartedPulling="2025-01-29 11:57:14.336282568 +0000 UTC m=+3.120229340" lastFinishedPulling="2025-01-29 11:57:17.151904008 +0000 UTC m=+5.935850770" observedRunningTime="2025-01-29 11:57:18.283149162 +0000 UTC m=+7.067095924" watchObservedRunningTime="2025-01-29 11:57:18.283319011 +0000 UTC m=+7.067265773" Jan 29 11:57:18.366631 kubelet[1768]: E0129 11:57:18.366562 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.366631 kubelet[1768]: W0129 11:57:18.366608 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.366631 kubelet[1768]: E0129 11:57:18.366635 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.367028 kubelet[1768]: E0129 11:57:18.367004 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.367028 kubelet[1768]: W0129 11:57:18.367019 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.367103 kubelet[1768]: E0129 11:57:18.367032 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.367421 kubelet[1768]: E0129 11:57:18.367379 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.367421 kubelet[1768]: W0129 11:57:18.367415 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.367501 kubelet[1768]: E0129 11:57:18.367446 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.367833 kubelet[1768]: E0129 11:57:18.367790 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.367833 kubelet[1768]: W0129 11:57:18.367810 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.367833 kubelet[1768]: E0129 11:57:18.367824 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.368162 kubelet[1768]: E0129 11:57:18.368130 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.368162 kubelet[1768]: W0129 11:57:18.368146 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.368162 kubelet[1768]: E0129 11:57:18.368157 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.368417 kubelet[1768]: E0129 11:57:18.368390 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.368417 kubelet[1768]: W0129 11:57:18.368407 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.368417 kubelet[1768]: E0129 11:57:18.368420 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.368675 kubelet[1768]: E0129 11:57:18.368654 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.368675 kubelet[1768]: W0129 11:57:18.368671 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.368770 kubelet[1768]: E0129 11:57:18.368681 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.368966 kubelet[1768]: E0129 11:57:18.368919 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.368966 kubelet[1768]: W0129 11:57:18.368962 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.369029 kubelet[1768]: E0129 11:57:18.368973 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.369229 kubelet[1768]: E0129 11:57:18.369211 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.369229 kubelet[1768]: W0129 11:57:18.369224 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.369307 kubelet[1768]: E0129 11:57:18.369235 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.369468 kubelet[1768]: E0129 11:57:18.369448 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.369468 kubelet[1768]: W0129 11:57:18.369463 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.369536 kubelet[1768]: E0129 11:57:18.369473 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.369715 kubelet[1768]: E0129 11:57:18.369696 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.369715 kubelet[1768]: W0129 11:57:18.369710 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.369780 kubelet[1768]: E0129 11:57:18.369721 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.369974 kubelet[1768]: E0129 11:57:18.369953 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.369974 kubelet[1768]: W0129 11:57:18.369969 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.370027 kubelet[1768]: E0129 11:57:18.369979 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.370254 kubelet[1768]: E0129 11:57:18.370236 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.370254 kubelet[1768]: W0129 11:57:18.370250 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.370399 kubelet[1768]: E0129 11:57:18.370260 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.370489 kubelet[1768]: E0129 11:57:18.370471 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.370489 kubelet[1768]: W0129 11:57:18.370486 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.370535 kubelet[1768]: E0129 11:57:18.370499 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.370754 kubelet[1768]: E0129 11:57:18.370735 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.370754 kubelet[1768]: W0129 11:57:18.370749 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.370820 kubelet[1768]: E0129 11:57:18.370760 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.371010 kubelet[1768]: E0129 11:57:18.370992 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.371010 kubelet[1768]: W0129 11:57:18.371007 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.371067 kubelet[1768]: E0129 11:57:18.371018 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.371260 kubelet[1768]: E0129 11:57:18.371243 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.371260 kubelet[1768]: W0129 11:57:18.371256 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.371313 kubelet[1768]: E0129 11:57:18.371268 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.371501 kubelet[1768]: E0129 11:57:18.371483 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.371501 kubelet[1768]: W0129 11:57:18.371498 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.371546 kubelet[1768]: E0129 11:57:18.371508 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.371744 kubelet[1768]: E0129 11:57:18.371727 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.371744 kubelet[1768]: W0129 11:57:18.371742 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.371798 kubelet[1768]: E0129 11:57:18.371752 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.372000 kubelet[1768]: E0129 11:57:18.371982 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.372000 kubelet[1768]: W0129 11:57:18.371997 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.372062 kubelet[1768]: E0129 11:57:18.372009 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.467159 kubelet[1768]: E0129 11:57:18.467113 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.467159 kubelet[1768]: W0129 11:57:18.467137 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.467159 kubelet[1768]: E0129 11:57:18.467159 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.467419 kubelet[1768]: E0129 11:57:18.467391 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.467419 kubelet[1768]: W0129 11:57:18.467403 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.467419 kubelet[1768]: E0129 11:57:18.467418 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.467852 kubelet[1768]: E0129 11:57:18.467790 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.467852 kubelet[1768]: W0129 11:57:18.467832 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.467911 kubelet[1768]: E0129 11:57:18.467869 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.468213 kubelet[1768]: E0129 11:57:18.468164 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.468213 kubelet[1768]: W0129 11:57:18.468183 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.468213 kubelet[1768]: E0129 11:57:18.468203 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.468499 kubelet[1768]: E0129 11:57:18.468458 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.468499 kubelet[1768]: W0129 11:57:18.468478 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.468499 kubelet[1768]: E0129 11:57:18.468500 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.468811 kubelet[1768]: E0129 11:57:18.468781 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.468811 kubelet[1768]: W0129 11:57:18.468803 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.468901 kubelet[1768]: E0129 11:57:18.468835 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.469152 kubelet[1768]: E0129 11:57:18.469121 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.469152 kubelet[1768]: W0129 11:57:18.469147 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.469234 kubelet[1768]: E0129 11:57:18.469159 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.469424 kubelet[1768]: E0129 11:57:18.469402 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.469424 kubelet[1768]: W0129 11:57:18.469419 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.469515 kubelet[1768]: E0129 11:57:18.469437 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.469707 kubelet[1768]: E0129 11:57:18.469677 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.469707 kubelet[1768]: W0129 11:57:18.469693 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.469707 kubelet[1768]: E0129 11:57:18.469704 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.469998 kubelet[1768]: E0129 11:57:18.469978 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.469998 kubelet[1768]: W0129 11:57:18.469995 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.470066 kubelet[1768]: E0129 11:57:18.470007 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:18.470327 kubelet[1768]: E0129 11:57:18.470299 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.470327 kubelet[1768]: W0129 11:57:18.470318 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.470369 kubelet[1768]: E0129 11:57:18.470330 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.470795 kubelet[1768]: E0129 11:57:18.470771 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:18.470795 kubelet[1768]: W0129 11:57:18.470787 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:18.470853 kubelet[1768]: E0129 11:57:18.470800 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:18.684891 kubelet[1768]: E0129 11:57:18.684689 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:19.153399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701448551.mount: Deactivated successfully. 
Jan 29 11:57:19.269746 kubelet[1768]: E0129 11:57:19.269697 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:19.279523 kubelet[1768]: E0129 11:57:19.279485 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.279523 kubelet[1768]: W0129 11:57:19.279510 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.279694 kubelet[1768]: E0129 11:57:19.279535 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.279960 kubelet[1768]: E0129 11:57:19.279789 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.279960 kubelet[1768]: W0129 11:57:19.279804 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.279960 kubelet[1768]: E0129 11:57:19.279816 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.280113 kubelet[1768]: E0129 11:57:19.280072 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.280113 kubelet[1768]: W0129 11:57:19.280090 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.280113 kubelet[1768]: E0129 11:57:19.280102 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.280495 kubelet[1768]: E0129 11:57:19.280473 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.280761 kubelet[1768]: W0129 11:57:19.280574 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.280761 kubelet[1768]: E0129 11:57:19.280605 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.280996 kubelet[1768]: E0129 11:57:19.280915 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.280996 kubelet[1768]: W0129 11:57:19.280944 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.280996 kubelet[1768]: E0129 11:57:19.280955 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.281463 kubelet[1768]: E0129 11:57:19.281411 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.281463 kubelet[1768]: W0129 11:57:19.281449 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.281521 kubelet[1768]: E0129 11:57:19.281478 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.281775 kubelet[1768]: E0129 11:57:19.281757 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.281775 kubelet[1768]: W0129 11:57:19.281770 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.281837 kubelet[1768]: E0129 11:57:19.281781 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.282102 kubelet[1768]: E0129 11:57:19.282074 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.282102 kubelet[1768]: W0129 11:57:19.282091 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.282102 kubelet[1768]: E0129 11:57:19.282101 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.282441 kubelet[1768]: E0129 11:57:19.282338 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.282441 kubelet[1768]: W0129 11:57:19.282351 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.282441 kubelet[1768]: E0129 11:57:19.282360 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.282644 kubelet[1768]: E0129 11:57:19.282621 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.282644 kubelet[1768]: W0129 11:57:19.282638 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.282694 kubelet[1768]: E0129 11:57:19.282648 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.283283 kubelet[1768]: E0129 11:57:19.282857 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.283283 kubelet[1768]: W0129 11:57:19.282882 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.283283 kubelet[1768]: E0129 11:57:19.282895 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.283283 kubelet[1768]: E0129 11:57:19.283192 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.283283 kubelet[1768]: W0129 11:57:19.283203 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.283283 kubelet[1768]: E0129 11:57:19.283213 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.283495 kubelet[1768]: E0129 11:57:19.283473 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.283535 kubelet[1768]: W0129 11:57:19.283495 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.283535 kubelet[1768]: E0129 11:57:19.283509 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.283850 kubelet[1768]: E0129 11:57:19.283765 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.283850 kubelet[1768]: W0129 11:57:19.283783 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.283850 kubelet[1768]: E0129 11:57:19.283796 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.284073 kubelet[1768]: E0129 11:57:19.284052 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.284073 kubelet[1768]: W0129 11:57:19.284069 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.284150 kubelet[1768]: E0129 11:57:19.284080 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.284317 kubelet[1768]: E0129 11:57:19.284299 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.284317 kubelet[1768]: W0129 11:57:19.284314 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.284367 kubelet[1768]: E0129 11:57:19.284325 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.284602 kubelet[1768]: E0129 11:57:19.284581 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.284602 kubelet[1768]: W0129 11:57:19.284598 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.284662 kubelet[1768]: E0129 11:57:19.284612 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.284980 kubelet[1768]: E0129 11:57:19.284907 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.284980 kubelet[1768]: W0129 11:57:19.284948 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.284980 kubelet[1768]: E0129 11:57:19.284962 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.285296 kubelet[1768]: E0129 11:57:19.285239 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.285296 kubelet[1768]: W0129 11:57:19.285257 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.285296 kubelet[1768]: E0129 11:57:19.285269 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.285536 kubelet[1768]: E0129 11:57:19.285514 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.285536 kubelet[1768]: W0129 11:57:19.285533 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.285607 kubelet[1768]: E0129 11:57:19.285545 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.314465 containerd[1460]: time="2025-01-29T11:57:19.314376463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:19.315215 containerd[1460]: time="2025-01-29T11:57:19.315163549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:57:19.316809 containerd[1460]: time="2025-01-29T11:57:19.316772025Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:19.319332 containerd[1460]: time="2025-01-29T11:57:19.319302790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:19.319967 containerd[1460]: time="2025-01-29T11:57:19.319908355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.167801467s" Jan 29 11:57:19.320039 containerd[1460]: time="2025-01-29T11:57:19.319968899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:57:19.322088 containerd[1460]: time="2025-01-29T11:57:19.322055421Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:57:19.339324 containerd[1460]: time="2025-01-29T11:57:19.339269538Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48\"" Jan 29 11:57:19.340033 containerd[1460]: time="2025-01-29T11:57:19.340006008Z" level=info msg="StartContainer for \"630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48\"" Jan 29 11:57:19.372102 systemd[1]: Started cri-containerd-630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48.scope - libcontainer container 630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48. Jan 29 11:57:19.374010 kubelet[1768]: E0129 11:57:19.373983 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.374010 kubelet[1768]: W0129 11:57:19.374006 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.374089 kubelet[1768]: E0129 11:57:19.374028 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.374378 kubelet[1768]: E0129 11:57:19.374355 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.374441 kubelet[1768]: W0129 11:57:19.374368 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.374441 kubelet[1768]: E0129 11:57:19.374405 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.375455 kubelet[1768]: E0129 11:57:19.374848 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.375455 kubelet[1768]: W0129 11:57:19.374882 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.375455 kubelet[1768]: E0129 11:57:19.374898 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.375455 kubelet[1768]: E0129 11:57:19.375218 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.375455 kubelet[1768]: W0129 11:57:19.375226 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.375455 kubelet[1768]: E0129 11:57:19.375245 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.375618 kubelet[1768]: E0129 11:57:19.375565 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.375618 kubelet[1768]: W0129 11:57:19.375575 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.375697 kubelet[1768]: E0129 11:57:19.375655 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.375964 kubelet[1768]: E0129 11:57:19.375939 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.375964 kubelet[1768]: W0129 11:57:19.375952 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.376056 kubelet[1768]: E0129 11:57:19.375984 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.376285 kubelet[1768]: E0129 11:57:19.376258 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.376285 kubelet[1768]: W0129 11:57:19.376273 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.376338 kubelet[1768]: E0129 11:57:19.376309 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.376571 kubelet[1768]: E0129 11:57:19.376556 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.376571 kubelet[1768]: W0129 11:57:19.376568 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.376626 kubelet[1768]: E0129 11:57:19.376581 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.376832 kubelet[1768]: E0129 11:57:19.376816 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.376870 kubelet[1768]: W0129 11:57:19.376851 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.376989 kubelet[1768]: E0129 11:57:19.376969 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.377308 kubelet[1768]: E0129 11:57:19.377284 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.377308 kubelet[1768]: W0129 11:57:19.377298 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.377308 kubelet[1768]: E0129 11:57:19.377307 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.377810 kubelet[1768]: E0129 11:57:19.377787 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.377836 kubelet[1768]: W0129 11:57:19.377801 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.377859 kubelet[1768]: E0129 11:57:19.377836 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:19.378179 kubelet[1768]: E0129 11:57:19.378154 1768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:19.378179 kubelet[1768]: W0129 11:57:19.378170 1768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:19.378179 kubelet[1768]: E0129 11:57:19.378180 1768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:19.403662 containerd[1460]: time="2025-01-29T11:57:19.403521029Z" level=info msg="StartContainer for \"630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48\" returns successfully" Jan 29 11:57:19.418560 systemd[1]: cri-containerd-630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48.scope: Deactivated successfully. Jan 29 11:57:19.646304 containerd[1460]: time="2025-01-29T11:57:19.646195724Z" level=info msg="shim disconnected" id=630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48 namespace=k8s.io Jan 29 11:57:19.646304 containerd[1460]: time="2025-01-29T11:57:19.646296213Z" level=warning msg="cleaning up after shim disconnected" id=630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48 namespace=k8s.io Jan 29 11:57:19.646304 containerd[1460]: time="2025-01-29T11:57:19.646306071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:57:19.685952 kubelet[1768]: E0129 11:57:19.685803 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:20.134012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-630e36ef4995e9398324104dd280f6f57bdbc7336d0ae2a06b8c9afaf9e0fb48-rootfs.mount: Deactivated successfully. 
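The recurring "Nameserver limits exceeded" warnings reflect the resolver limit of three nameservers: the kubelet applies only the first three `nameserver` entries from resolv.conf and drops the rest, which is why the log shows the applied line as `1.1.1.1 1.0.0.1 8.8.8.8`. A small sketch of that truncation behavior (the fourth address and the awk filter are illustrative assumptions, not kubelet code):

```shell
# Illustrates the 3-nameserver cap behind "Nameserver limits exceeded":
# only the first three nameserver entries are applied; later ones (here a
# hypothetical 4th entry) are omitted, matching the applied line in the log.
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 192.0.2.53\n' |
  awk '/^nameserver/ && ++n <= 3 { print $2 }'
```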
Jan 29 11:57:20.253466 kubelet[1768]: E0129 11:57:20.253361 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:20.272889 kubelet[1768]: E0129 11:57:20.272813 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:20.273600 containerd[1460]: time="2025-01-29T11:57:20.273566284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:57:20.686673 kubelet[1768]: E0129 11:57:20.686626 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:21.687648 kubelet[1768]: E0129 11:57:21.687529 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:22.254110 kubelet[1768]: E0129 11:57:22.253746 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:22.688867 kubelet[1768]: E0129 11:57:22.688666 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:23.689181 kubelet[1768]: E0129 11:57:23.689115 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:24.253449 kubelet[1768]: E0129 11:57:24.253376 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:24.622355 containerd[1460]: time="2025-01-29T11:57:24.622177689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:24.647950 containerd[1460]: time="2025-01-29T11:57:24.647851688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:57:24.689608 kubelet[1768]: E0129 11:57:24.689546 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:24.695044 containerd[1460]: time="2025-01-29T11:57:24.694962310Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:24.721255 containerd[1460]: time="2025-01-29T11:57:24.721178896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:24.722151 containerd[1460]: time="2025-01-29T11:57:24.722068344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.448455903s" Jan 29 11:57:24.722151 containerd[1460]: time="2025-01-29T11:57:24.722134268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:57:24.724568 containerd[1460]: time="2025-01-29T11:57:24.724532444Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:57:25.010250 containerd[1460]: time="2025-01-29T11:57:25.010058135Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa\"" Jan 29 11:57:25.010887 containerd[1460]: time="2025-01-29T11:57:25.010725917Z" level=info msg="StartContainer for \"c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa\"" Jan 29 11:57:25.041294 systemd[1]: Started cri-containerd-c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa.scope - libcontainer container c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa. 
Jan 29 11:57:25.327870 containerd[1460]: time="2025-01-29T11:57:25.327706585Z" level=info msg="StartContainer for \"c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa\" returns successfully" Jan 29 11:57:25.690866 kubelet[1768]: E0129 11:57:25.690708 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:26.254121 kubelet[1768]: E0129 11:57:26.254026 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:26.331688 kubelet[1768]: E0129 11:57:26.331637 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:26.691544 kubelet[1768]: E0129 11:57:26.691366 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:27.333422 kubelet[1768]: E0129 11:57:27.333360 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:27.362484 systemd[1]: cri-containerd-c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa.scope: Deactivated successfully. Jan 29 11:57:27.383203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa-rootfs.mount: Deactivated successfully. 
Jan 29 11:57:27.410404 kubelet[1768]: I0129 11:57:27.410375 1768 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:57:27.692809 kubelet[1768]: E0129 11:57:27.692608 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:28.258585 systemd[1]: Created slice kubepods-besteffort-poda9d13e01_6e79_4768_8067_8fdf452aca9e.slice - libcontainer container kubepods-besteffort-poda9d13e01_6e79_4768_8067_8fdf452aca9e.slice. Jan 29 11:57:28.260595 containerd[1460]: time="2025-01-29T11:57:28.260549615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6rn2,Uid:a9d13e01-6e79-4768-8067-8fdf452aca9e,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:28.693973 kubelet[1768]: E0129 11:57:28.693759 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:28.870538 containerd[1460]: time="2025-01-29T11:57:28.870448295Z" level=info msg="shim disconnected" id=c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa namespace=k8s.io Jan 29 11:57:28.870538 containerd[1460]: time="2025-01-29T11:57:28.870520871Z" level=warning msg="cleaning up after shim disconnected" id=c517b54b8222ca2b4741b6d504b4769fab8bf114ad3c45ab32293fa3922813fa namespace=k8s.io Jan 29 11:57:28.870538 containerd[1460]: time="2025-01-29T11:57:28.870537202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:57:28.965946 containerd[1460]: time="2025-01-29T11:57:28.965777507Z" level=error msg="Failed to destroy network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:28.966616 containerd[1460]: time="2025-01-29T11:57:28.966582286Z" level=error 
msg="encountered an error cleaning up failed sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:28.966681 containerd[1460]: time="2025-01-29T11:57:28.966649212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6rn2,Uid:a9d13e01-6e79-4768-8067-8fdf452aca9e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:28.967014 kubelet[1768]: E0129 11:57:28.966956 1768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:28.967181 kubelet[1768]: E0129 11:57:28.967042 1768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:28.967181 kubelet[1768]: E0129 11:57:28.967070 1768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6rn2" Jan 29 11:57:28.967181 kubelet[1768]: E0129 11:57:28.967119 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6rn2_calico-system(a9d13e01-6e79-4768-8067-8fdf452aca9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6rn2_calico-system(a9d13e01-6e79-4768-8067-8fdf452aca9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:28.967858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62-shm.mount: Deactivated successfully. 
Jan 29 11:57:29.339452 kubelet[1768]: E0129 11:57:29.339103 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:29.339826 kubelet[1768]: I0129 11:57:29.339790 1768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:29.339919 containerd[1460]: time="2025-01-29T11:57:29.339855303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:57:29.341240 containerd[1460]: time="2025-01-29T11:57:29.340480616Z" level=info msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" Jan 29 11:57:29.341240 containerd[1460]: time="2025-01-29T11:57:29.340653530Z" level=info msg="Ensure that sandbox 0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62 in task-service has been cleanup successfully" Jan 29 11:57:29.370155 containerd[1460]: time="2025-01-29T11:57:29.370075486Z" level=error msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" failed" error="failed to destroy network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:29.370452 kubelet[1768]: E0129 11:57:29.370403 1768 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:29.370566 kubelet[1768]: E0129 11:57:29.370482 1768 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62"} Jan 29 11:57:29.370611 kubelet[1768]: E0129 11:57:29.370584 1768 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9d13e01-6e79-4768-8067-8fdf452aca9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:29.370691 kubelet[1768]: E0129 11:57:29.370621 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9d13e01-6e79-4768-8067-8fdf452aca9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6rn2" podUID="a9d13e01-6e79-4768-8067-8fdf452aca9e" Jan 29 11:57:29.378530 kubelet[1768]: I0129 11:57:29.378491 1768 topology_manager.go:215] "Topology Admit Handler" podUID="8cc386dc-04f6-4f58-9b8c-56b8293a2190" podNamespace="default" podName="nginx-deployment-85f456d6dd-j4f8v" Jan 29 11:57:29.382818 kubelet[1768]: W0129 11:57:29.382770 1768 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.125" cannot list resource "configmaps" in 
API group "" in the namespace "default": no relationship found between node '10.0.0.125' and this object Jan 29 11:57:29.382899 kubelet[1768]: E0129 11:57:29.382823 1768 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.125" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.125' and this object Jan 29 11:57:29.384681 systemd[1]: Created slice kubepods-besteffort-pod8cc386dc_04f6_4f58_9b8c_56b8293a2190.slice - libcontainer container kubepods-besteffort-pod8cc386dc_04f6_4f58_9b8c_56b8293a2190.slice. Jan 29 11:57:29.549586 kubelet[1768]: I0129 11:57:29.549497 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxjk\" (UniqueName: \"kubernetes.io/projected/8cc386dc-04f6-4f58-9b8c-56b8293a2190-kube-api-access-9cxjk\") pod \"nginx-deployment-85f456d6dd-j4f8v\" (UID: \"8cc386dc-04f6-4f58-9b8c-56b8293a2190\") " pod="default/nginx-deployment-85f456d6dd-j4f8v" Jan 29 11:57:29.694838 kubelet[1768]: E0129 11:57:29.694676 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:30.694896 kubelet[1768]: E0129 11:57:30.694814 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:30.888626 containerd[1460]: time="2025-01-29T11:57:30.888574111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-j4f8v,Uid:8cc386dc-04f6-4f58-9b8c-56b8293a2190,Namespace:default,Attempt:0,}" Jan 29 11:57:30.952711 containerd[1460]: time="2025-01-29T11:57:30.952545377Z" level=error msg="Failed to destroy network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:30.953096 containerd[1460]: time="2025-01-29T11:57:30.953060823Z" level=error msg="encountered an error cleaning up failed sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:30.953188 containerd[1460]: time="2025-01-29T11:57:30.953128871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-j4f8v,Uid:8cc386dc-04f6-4f58-9b8c-56b8293a2190,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:30.953508 kubelet[1768]: E0129 11:57:30.953439 1768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:30.953631 kubelet[1768]: E0129 11:57:30.953520 1768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-j4f8v" Jan 29 11:57:30.953631 kubelet[1768]: E0129 11:57:30.953551 1768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-j4f8v" Jan 29 11:57:30.953815 kubelet[1768]: E0129 11:57:30.953619 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-j4f8v_default(8cc386dc-04f6-4f58-9b8c-56b8293a2190)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-j4f8v_default(8cc386dc-04f6-4f58-9b8c-56b8293a2190)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-j4f8v" podUID="8cc386dc-04f6-4f58-9b8c-56b8293a2190" Jan 29 11:57:30.954829 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32-shm.mount: Deactivated successfully. 
Jan 29 11:57:31.348352 kubelet[1768]: I0129 11:57:31.347713 1768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:31.350462 containerd[1460]: time="2025-01-29T11:57:31.349683390Z" level=info msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\"" Jan 29 11:57:31.350462 containerd[1460]: time="2025-01-29T11:57:31.350125680Z" level=info msg="Ensure that sandbox f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32 in task-service has been cleanup successfully" Jan 29 11:57:31.426664 containerd[1460]: time="2025-01-29T11:57:31.426596593Z" level=error msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" failed" error="failed to destroy network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:31.427142 kubelet[1768]: E0129 11:57:31.427074 1768 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:31.427229 kubelet[1768]: E0129 11:57:31.427152 1768 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32"} Jan 29 11:57:31.427296 kubelet[1768]: E0129 11:57:31.427261 1768 kuberuntime_manager.go:1075] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"8cc386dc-04f6-4f58-9b8c-56b8293a2190\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:31.427366 kubelet[1768]: E0129 11:57:31.427314 1768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cc386dc-04f6-4f58-9b8c-56b8293a2190\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-j4f8v" podUID="8cc386dc-04f6-4f58-9b8c-56b8293a2190" Jan 29 11:57:31.680555 kubelet[1768]: E0129 11:57:31.680020 1768 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:31.695790 kubelet[1768]: E0129 11:57:31.695744 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:32.696806 kubelet[1768]: E0129 11:57:32.696721 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:33.697197 kubelet[1768]: E0129 11:57:33.697141 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:34.697901 kubelet[1768]: E0129 11:57:34.697838 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 
29 11:57:34.870485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902763805.mount: Deactivated successfully. Jan 29 11:57:35.698220 kubelet[1768]: E0129 11:57:35.698147 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:36.425716 containerd[1460]: time="2025-01-29T11:57:36.425634888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.438066 containerd[1460]: time="2025-01-29T11:57:36.438010506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:57:36.445028 containerd[1460]: time="2025-01-29T11:57:36.444989307Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.460108 containerd[1460]: time="2025-01-29T11:57:36.460063029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.460720 containerd[1460]: time="2025-01-29T11:57:36.460667036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.120758273s" Jan 29 11:57:36.460720 containerd[1460]: time="2025-01-29T11:57:36.460706552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:57:36.470202 containerd[1460]: 
time="2025-01-29T11:57:36.470071298Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:57:36.654515 containerd[1460]: time="2025-01-29T11:57:36.654423264Z" level=info msg="CreateContainer within sandbox \"799d4aa81d8178b8c635116da3874cb2de0aea74441b7ec8aaab2cd4ebda3bf6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ce0bd89475120bd1636ad2c891e3cb86e358fc24a6dbc791af06f67b0d4baf44\"" Jan 29 11:57:36.655169 containerd[1460]: time="2025-01-29T11:57:36.655132694Z" level=info msg="StartContainer for \"ce0bd89475120bd1636ad2c891e3cb86e358fc24a6dbc791af06f67b0d4baf44\"" Jan 29 11:57:36.691146 systemd[1]: Started cri-containerd-ce0bd89475120bd1636ad2c891e3cb86e358fc24a6dbc791af06f67b0d4baf44.scope - libcontainer container ce0bd89475120bd1636ad2c891e3cb86e358fc24a6dbc791af06f67b0d4baf44. Jan 29 11:57:36.698309 kubelet[1768]: E0129 11:57:36.698273 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:36.914747 containerd[1460]: time="2025-01-29T11:57:36.914686559Z" level=info msg="StartContainer for \"ce0bd89475120bd1636ad2c891e3cb86e358fc24a6dbc791af06f67b0d4baf44\" returns successfully" Jan 29 11:57:36.954481 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:57:36.954662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 29 11:57:37.362183 kubelet[1768]: E0129 11:57:37.362071 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:37.577110 kubelet[1768]: I0129 11:57:37.577015 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l4ptl" podStartSLOduration=4.459000821 podStartE2EDuration="26.576993342s" podCreationTimestamp="2025-01-29 11:57:11 +0000 UTC" firstStartedPulling="2025-01-29 11:57:14.343572257 +0000 UTC m=+3.127519019" lastFinishedPulling="2025-01-29 11:57:36.461564777 +0000 UTC m=+25.245511540" observedRunningTime="2025-01-29 11:57:37.576769022 +0000 UTC m=+26.360715784" watchObservedRunningTime="2025-01-29 11:57:37.576993342 +0000 UTC m=+26.360940104" Jan 29 11:57:37.699282 kubelet[1768]: E0129 11:57:37.699168 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:38.365200 kubelet[1768]: E0129 11:57:38.365159 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:38.700429 kubelet[1768]: E0129 11:57:38.700357 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:39.241986 kernel: bpftool[2684]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:57:39.500642 systemd-networkd[1388]: vxlan.calico: Link UP Jan 29 11:57:39.500655 systemd-networkd[1388]: vxlan.calico: Gained carrier Jan 29 11:57:39.700601 kubelet[1768]: E0129 11:57:39.700532 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:40.601186 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Jan 29 11:57:40.701256 kubelet[1768]: E0129 
11:57:40.701173 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:41.701503 kubelet[1768]: E0129 11:57:41.701452 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:42.701747 kubelet[1768]: E0129 11:57:42.701614 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:43.702077 kubelet[1768]: E0129 11:57:43.701998 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:44.254026 containerd[1460]: time="2025-01-29T11:57:44.253950916Z" level=info msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.365 [INFO][2779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.365 [INFO][2779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" iface="eth0" netns="/var/run/netns/cni-32f85595-a400-af1f-b12b-7c00a735127d" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.365 [INFO][2779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" iface="eth0" netns="/var/run/netns/cni-32f85595-a400-af1f-b12b-7c00a735127d" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.366 [INFO][2779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" iface="eth0" netns="/var/run/netns/cni-32f85595-a400-af1f-b12b-7c00a735127d" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.366 [INFO][2779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.366 [INFO][2779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.387 [INFO][2786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.387 [INFO][2786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.387 [INFO][2786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.393 [WARNING][2786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.393 [INFO][2786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.396 [INFO][2786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:44.403379 containerd[1460]: 2025-01-29 11:57:44.401 [INFO][2779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:57:44.404283 containerd[1460]: time="2025-01-29T11:57:44.403587946Z" level=info msg="TearDown network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" successfully" Jan 29 11:57:44.404283 containerd[1460]: time="2025-01-29T11:57:44.403627411Z" level=info msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" returns successfully" Jan 29 11:57:44.404766 containerd[1460]: time="2025-01-29T11:57:44.404724266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6rn2,Uid:a9d13e01-6e79-4768-8067-8fdf452aca9e,Namespace:calico-system,Attempt:1,}" Jan 29 11:57:44.406418 systemd[1]: run-netns-cni\x2d32f85595\x2da400\x2daf1f\x2db12b\x2d7c00a735127d.mount: Deactivated successfully. 
Jan 29 11:57:44.601740 systemd-networkd[1388]: cali7a3dc71b523: Link UP Jan 29 11:57:44.602970 systemd-networkd[1388]: cali7a3dc71b523: Gained carrier Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.459 [INFO][2793] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.125-k8s-csi--node--driver--h6rn2-eth0 csi-node-driver- calico-system a9d13e01-6e79-4768-8067-8fdf452aca9e 1072 0 2025-01-29 11:57:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.125 csi-node-driver-h6rn2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a3dc71b523 [] []}} ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.459 [INFO][2793] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.564 [INFO][2807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" HandleID="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.572 [INFO][2807] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" HandleID="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003093b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.125", "pod":"csi-node-driver-h6rn2", "timestamp":"2025-01-29 11:57:44.564092858 +0000 UTC"}, Hostname:"10.0.0.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.572 [INFO][2807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.572 [INFO][2807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.572 [INFO][2807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.125' Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.574 [INFO][2807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.577 [INFO][2807] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.582 [INFO][2807] ipam/ipam.go 489: Trying affinity for 192.168.83.192/26 host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.583 [INFO][2807] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.192/26 host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.585 [INFO][2807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.192/26 
host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.585 [INFO][2807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.192/26 handle="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.586 [INFO][2807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118 Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.590 [INFO][2807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.192/26 handle="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.595 [INFO][2807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.193/26] block=192.168.83.192/26 handle="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.595 [INFO][2807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.193/26] handle="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" host="10.0.0.125" Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.596 [INFO][2807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:57:44.619917 containerd[1460]: 2025-01-29 11:57:44.596 [INFO][2807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.193/26] IPv6=[] ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" HandleID="k8s-pod-network.09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.598 [INFO][2793] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-csi--node--driver--h6rn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9d13e01-6e79-4768-8067-8fdf452aca9e", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"", Pod:"csi-node-driver-h6rn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3dc71b523", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.599 [INFO][2793] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.193/32] ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.599 [INFO][2793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a3dc71b523 ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.602 [INFO][2793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.603 [INFO][2793] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-csi--node--driver--h6rn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9d13e01-6e79-4768-8067-8fdf452aca9e", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 
57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118", Pod:"csi-node-driver-h6rn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3dc71b523", MAC:"da:3a:54:3d:27:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:44.620509 containerd[1460]: 2025-01-29 11:57:44.613 [INFO][2793] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118" Namespace="calico-system" Pod="csi-node-driver-h6rn2" WorkloadEndpoint="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:57:44.644648 containerd[1460]: time="2025-01-29T11:57:44.644505985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:44.644648 containerd[1460]: time="2025-01-29T11:57:44.644609000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:44.644648 containerd[1460]: time="2025-01-29T11:57:44.644627826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:44.644859 containerd[1460]: time="2025-01-29T11:57:44.644729550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:44.673225 systemd[1]: Started cri-containerd-09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118.scope - libcontainer container 09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118. Jan 29 11:57:44.686356 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:44.697853 containerd[1460]: time="2025-01-29T11:57:44.697790640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6rn2,Uid:a9d13e01-6e79-4768-8067-8fdf452aca9e,Namespace:calico-system,Attempt:1,} returns sandbox id \"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118\"" Jan 29 11:57:44.699902 containerd[1460]: time="2025-01-29T11:57:44.699866414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:57:44.703111 kubelet[1768]: E0129 11:57:44.703058 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:45.254454 containerd[1460]: time="2025-01-29T11:57:45.254404336Z" level=info msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\"" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" iface="eth0" netns="/var/run/netns/cni-7f0d66e4-696b-256a-03fc-bd928f894c89" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" iface="eth0" netns="/var/run/netns/cni-7f0d66e4-696b-256a-03fc-bd928f894c89" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" iface="eth0" netns="/var/run/netns/cni-7f0d66e4-696b-256a-03fc-bd928f894c89" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.292 [INFO][2893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.313 [INFO][2900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.314 [INFO][2900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.314 [INFO][2900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.319 [WARNING][2900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.319 [INFO][2900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.322 [INFO][2900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:45.326208 containerd[1460]: 2025-01-29 11:57:45.324 [INFO][2893] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:57:45.326704 containerd[1460]: time="2025-01-29T11:57:45.326424500Z" level=info msg="TearDown network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" successfully" Jan 29 11:57:45.326704 containerd[1460]: time="2025-01-29T11:57:45.326458364Z" level=info msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" returns successfully" Jan 29 11:57:45.327234 containerd[1460]: time="2025-01-29T11:57:45.327208117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-j4f8v,Uid:8cc386dc-04f6-4f58-9b8c-56b8293a2190,Namespace:default,Attempt:1,}" Jan 29 11:57:45.408203 systemd[1]: run-containerd-runc-k8s.io-09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118-runc.fczLG2.mount: Deactivated successfully. 
Jan 29 11:57:45.408357 systemd[1]: run-netns-cni\x2d7f0d66e4\x2d696b\x2d256a\x2d03fc\x2dbd928f894c89.mount: Deactivated successfully. Jan 29 11:57:45.439307 systemd-networkd[1388]: califf385d48b1d: Link UP Jan 29 11:57:45.439695 systemd-networkd[1388]: califf385d48b1d: Gained carrier Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.371 [INFO][2907] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0 nginx-deployment-85f456d6dd- default 8cc386dc-04f6-4f58-9b8c-56b8293a2190 1081 0 2025-01-29 11:57:29 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.125 nginx-deployment-85f456d6dd-j4f8v eth0 default [] [] [kns.default ksa.default.default] califf385d48b1d [] []}} ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.371 [INFO][2907] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.400 [INFO][2921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" HandleID="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.411 [INFO][2921] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" HandleID="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.125", "pod":"nginx-deployment-85f456d6dd-j4f8v", "timestamp":"2025-01-29 11:57:45.400643087 +0000 UTC"}, Hostname:"10.0.0.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.411 [INFO][2921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.411 [INFO][2921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.411 [INFO][2921] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.125' Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.413 [INFO][2921] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.417 [INFO][2921] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.422 [INFO][2921] ipam/ipam.go 489: Trying affinity for 192.168.83.192/26 host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.423 [INFO][2921] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.192/26 host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.425 [INFO][2921] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.83.192/26 host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.425 [INFO][2921] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.192/26 handle="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.427 [INFO][2921] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.430 [INFO][2921] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.192/26 handle="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.434 [INFO][2921] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.194/26] block=192.168.83.192/26 handle="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.434 [INFO][2921] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.194/26] handle="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" host="10.0.0.125" Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.434 [INFO][2921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:57:45.450210 containerd[1460]: 2025-01-29 11:57:45.434 [INFO][2921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.194/26] IPv6=[] ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" HandleID="k8s-pod-network.de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.437 [INFO][2907] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8cc386dc-04f6-4f58-9b8c-56b8293a2190", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-j4f8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf385d48b1d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.437 [INFO][2907] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.194/32] ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.437 [INFO][2907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf385d48b1d ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.440 [INFO][2907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.440 [INFO][2907] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8cc386dc-04f6-4f58-9b8c-56b8293a2190", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 29, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f", Pod:"nginx-deployment-85f456d6dd-j4f8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf385d48b1d", MAC:"62:5a:b4:d2:22:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:45.450967 containerd[1460]: 2025-01-29 11:57:45.447 [INFO][2907] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f" Namespace="default" Pod="nginx-deployment-85f456d6dd-j4f8v" WorkloadEndpoint="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:57:45.471915 containerd[1460]: time="2025-01-29T11:57:45.471706615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:45.471915 containerd[1460]: time="2025-01-29T11:57:45.471761870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:45.471915 containerd[1460]: time="2025-01-29T11:57:45.471772400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:45.471915 containerd[1460]: time="2025-01-29T11:57:45.471844326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:45.491172 systemd[1]: Started cri-containerd-de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f.scope - libcontainer container de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f. Jan 29 11:57:45.504000 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:45.530256 containerd[1460]: time="2025-01-29T11:57:45.530128159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-j4f8v,Uid:8cc386dc-04f6-4f58-9b8c-56b8293a2190,Namespace:default,Attempt:1,} returns sandbox id \"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f\"" Jan 29 11:57:45.703746 kubelet[1768]: E0129 11:57:45.703688 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:45.849251 systemd-networkd[1388]: cali7a3dc71b523: Gained IPv6LL Jan 29 11:57:46.044975 containerd[1460]: time="2025-01-29T11:57:46.044895237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:46.045826 containerd[1460]: time="2025-01-29T11:57:46.045787229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:57:46.047067 containerd[1460]: time="2025-01-29T11:57:46.047042881Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:46.049305 containerd[1460]: time="2025-01-29T11:57:46.049261010Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:46.050001 containerd[1460]: time="2025-01-29T11:57:46.049966938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.350053856s" Jan 29 11:57:46.050029 containerd[1460]: time="2025-01-29T11:57:46.050003849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:57:46.051201 containerd[1460]: time="2025-01-29T11:57:46.051157347Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:57:46.052313 containerd[1460]: time="2025-01-29T11:57:46.052274838Z" level=info msg="CreateContainer within sandbox \"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:57:46.069707 containerd[1460]: time="2025-01-29T11:57:46.069653348Z" level=info msg="CreateContainer within sandbox \"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dc728a9ca660187b4333bfc9fb8ccffa2650b612f6fadf2ec4128309a762b77c\"" Jan 29 11:57:46.070214 containerd[1460]: time="2025-01-29T11:57:46.070178805Z" level=info msg="StartContainer for \"dc728a9ca660187b4333bfc9fb8ccffa2650b612f6fadf2ec4128309a762b77c\"" Jan 29 11:57:46.098059 systemd[1]: Started cri-containerd-dc728a9ca660187b4333bfc9fb8ccffa2650b612f6fadf2ec4128309a762b77c.scope - libcontainer container dc728a9ca660187b4333bfc9fb8ccffa2650b612f6fadf2ec4128309a762b77c. 
Jan 29 11:57:46.129450 containerd[1460]: time="2025-01-29T11:57:46.129405691Z" level=info msg="StartContainer for \"dc728a9ca660187b4333bfc9fb8ccffa2650b612f6fadf2ec4128309a762b77c\" returns successfully" Jan 29 11:57:46.704591 kubelet[1768]: E0129 11:57:46.704526 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:47.003867 kubelet[1768]: E0129 11:57:47.003730 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:47.250900 update_engine[1452]: I20250129 11:57:47.250803 1452 update_attempter.cc:509] Updating boot flags... Jan 29 11:57:47.274980 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3050) Jan 29 11:57:47.322958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3052) Jan 29 11:57:47.359261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3052) Jan 29 11:57:47.388149 systemd-networkd[1388]: califf385d48b1d: Gained IPv6LL Jan 29 11:57:47.704689 kubelet[1768]: E0129 11:57:47.704630 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:48.705207 kubelet[1768]: E0129 11:57:48.705090 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:49.705498 kubelet[1768]: E0129 11:57:49.705427 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:50.656578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138077492.mount: Deactivated successfully. 
Jan 29 11:57:50.706310 kubelet[1768]: E0129 11:57:50.706231 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:51.678830 kubelet[1768]: E0129 11:57:51.678777 1768 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:51.706913 kubelet[1768]: E0129 11:57:51.706874 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:57:52.255184 containerd[1460]: time="2025-01-29T11:57:52.255121294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:52.255948 containerd[1460]: time="2025-01-29T11:57:52.255873986Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:57:52.257159 containerd[1460]: time="2025-01-29T11:57:52.257120202Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:52.259764 containerd[1460]: time="2025-01-29T11:57:52.259714224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:52.260755 containerd[1460]: time="2025-01-29T11:57:52.260717540Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.209512353s" Jan 29 11:57:52.260813 containerd[1460]: time="2025-01-29T11:57:52.260757296Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:57:52.262132 containerd[1460]: time="2025-01-29T11:57:52.262089915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:57:52.263289 containerd[1460]: time="2025-01-29T11:57:52.263230200Z" level=info msg="CreateContainer within sandbox \"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:57:52.276536 containerd[1460]: time="2025-01-29T11:57:52.276483309Z" level=info msg="CreateContainer within sandbox \"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"066e524d144f7c4b62f990097264d0e47c9cf349973c99722afa4d6e0841291d\"" Jan 29 11:57:52.277016 containerd[1460]: time="2025-01-29T11:57:52.276981381Z" level=info msg="StartContainer for \"066e524d144f7c4b62f990097264d0e47c9cf349973c99722afa4d6e0841291d\"" Jan 29 11:57:52.357201 systemd[1]: Started cri-containerd-066e524d144f7c4b62f990097264d0e47c9cf349973c99722afa4d6e0841291d.scope - libcontainer container 066e524d144f7c4b62f990097264d0e47c9cf349973c99722afa4d6e0841291d. 
Jan 29 11:57:52.538807 containerd[1460]: time="2025-01-29T11:57:52.538632301Z" level=info msg="StartContainer for \"066e524d144f7c4b62f990097264d0e47c9cf349973c99722afa4d6e0841291d\" returns successfully"
Jan 29 11:57:52.707836 kubelet[1768]: E0129 11:57:52.707753 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:53.708937 kubelet[1768]: E0129 11:57:53.708854 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:53.714414 kubelet[1768]: I0129 11:57:53.714351 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-j4f8v" podStartSLOduration=17.984249003 podStartE2EDuration="24.714333506s" podCreationTimestamp="2025-01-29 11:57:29 +0000 UTC" firstStartedPulling="2025-01-29 11:57:45.531812477 +0000 UTC m=+34.315759239" lastFinishedPulling="2025-01-29 11:57:52.26189698 +0000 UTC m=+41.045843742" observedRunningTime="2025-01-29 11:57:53.714172943 +0000 UTC m=+42.498119725" watchObservedRunningTime="2025-01-29 11:57:53.714333506 +0000 UTC m=+42.498280268"
Jan 29 11:57:54.014740 containerd[1460]: time="2025-01-29T11:57:54.014568109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.016031 containerd[1460]: time="2025-01-29T11:57:54.015915623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 11:57:54.019146 containerd[1460]: time="2025-01-29T11:57:54.019066774Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.021354 containerd[1460]: time="2025-01-29T11:57:54.021278410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.022000 containerd[1460]: time="2025-01-29T11:57:54.021936753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.75978915s"
Jan 29 11:57:54.022000 containerd[1460]: time="2025-01-29T11:57:54.021988119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 29 11:57:54.024588 containerd[1460]: time="2025-01-29T11:57:54.024515151Z" level=info msg="CreateContainer within sandbox \"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 11:57:54.074033 containerd[1460]: time="2025-01-29T11:57:54.073953633Z" level=info msg="CreateContainer within sandbox \"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ce66629fc1fe8026675b5aca19ef2ed3d676ecfca369baaff4f2550379163758\""
Jan 29 11:57:54.074706 containerd[1460]: time="2025-01-29T11:57:54.074648135Z" level=info msg="StartContainer for \"ce66629fc1fe8026675b5aca19ef2ed3d676ecfca369baaff4f2550379163758\""
Jan 29 11:57:54.123241 systemd[1]: Started cri-containerd-ce66629fc1fe8026675b5aca19ef2ed3d676ecfca369baaff4f2550379163758.scope - libcontainer container ce66629fc1fe8026675b5aca19ef2ed3d676ecfca369baaff4f2550379163758.
Jan 29 11:57:54.281261 containerd[1460]: time="2025-01-29T11:57:54.281094568Z" level=info msg="StartContainer for \"ce66629fc1fe8026675b5aca19ef2ed3d676ecfca369baaff4f2550379163758\" returns successfully"
Jan 29 11:57:54.290180 kubelet[1768]: I0129 11:57:54.290154 1768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 11:57:54.290307 kubelet[1768]: I0129 11:57:54.290190 1768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 11:57:54.562158 kubelet[1768]: I0129 11:57:54.561988 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h6rn2" podStartSLOduration=33.238502509 podStartE2EDuration="42.561962097s" podCreationTimestamp="2025-01-29 11:57:12 +0000 UTC" firstStartedPulling="2025-01-29 11:57:44.699392243 +0000 UTC m=+33.483339005" lastFinishedPulling="2025-01-29 11:57:54.022851831 +0000 UTC m=+42.806798593" observedRunningTime="2025-01-29 11:57:54.560688512 +0000 UTC m=+43.344635284" watchObservedRunningTime="2025-01-29 11:57:54.561962097 +0000 UTC m=+43.345908859"
Jan 29 11:57:54.709873 kubelet[1768]: E0129 11:57:54.709798 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:55.710828 kubelet[1768]: E0129 11:57:55.710756 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:56.711372 kubelet[1768]: E0129 11:57:56.711288 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:57.563033 kubelet[1768]: I0129 11:57:57.562953 1768 topology_manager.go:215] "Topology Admit Handler" podUID="ed823ec4-4940-433d-8f88-3dec164c6236" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 29 11:57:57.569712 systemd[1]: Created slice kubepods-besteffort-poded823ec4_4940_433d_8f88_3dec164c6236.slice - libcontainer container kubepods-besteffort-poded823ec4_4940_433d_8f88_3dec164c6236.slice.
Jan 29 11:57:57.712187 kubelet[1768]: E0129 11:57:57.712119 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:57.725461 kubelet[1768]: I0129 11:57:57.725367 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ed823ec4-4940-433d-8f88-3dec164c6236-data\") pod \"nfs-server-provisioner-0\" (UID: \"ed823ec4-4940-433d-8f88-3dec164c6236\") " pod="default/nfs-server-provisioner-0"
Jan 29 11:57:57.725461 kubelet[1768]: I0129 11:57:57.725445 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxjm\" (UniqueName: \"kubernetes.io/projected/ed823ec4-4940-433d-8f88-3dec164c6236-kube-api-access-jfxjm\") pod \"nfs-server-provisioner-0\" (UID: \"ed823ec4-4940-433d-8f88-3dec164c6236\") " pod="default/nfs-server-provisioner-0"
Jan 29 11:57:57.873430 containerd[1460]: time="2025-01-29T11:57:57.873237163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ed823ec4-4940-433d-8f88-3dec164c6236,Namespace:default,Attempt:0,}"
Jan 29 11:57:58.001492 systemd-networkd[1388]: cali60e51b789ff: Link UP
Jan 29 11:57:58.002362 systemd-networkd[1388]: cali60e51b789ff: Gained carrier
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.924 [INFO][3203] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.125-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ed823ec4-4940-433d-8f88-3dec164c6236 1146 0 2025-01-29 11:57:57 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.125 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.924 [INFO][3203] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.953 [INFO][3217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" HandleID="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Workload="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.965 [INFO][3217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" HandleID="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Workload="10.0.0.125-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad0c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.125", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:57:57.953816819 +0000 UTC"}, Hostname:"10.0.0.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.965 [INFO][3217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.965 [INFO][3217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.965 [INFO][3217] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.125'
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.967 [INFO][3217] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.971 [INFO][3217] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.977 [INFO][3217] ipam/ipam.go 489: Trying affinity for 192.168.83.192/26 host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.980 [INFO][3217] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.192/26 host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.983 [INFO][3217] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.192/26 host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.983 [INFO][3217] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.192/26 handle="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.984 [INFO][3217] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.988 [INFO][3217] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.192/26 handle="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.994 [INFO][3217] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.195/26] block=192.168.83.192/26 handle="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.995 [INFO][3217] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.195/26] handle="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" host="10.0.0.125"
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.995 [INFO][3217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:57:58.012681 containerd[1460]: 2025-01-29 11:57:57.995 [INFO][3217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.195/26] IPv6=[] ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" HandleID="k8s-pod-network.0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Workload="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.013665 containerd[1460]: 2025-01-29 11:57:57.998 [INFO][3203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ed823ec4-4940-433d-8f88-3dec164c6236", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:57:58.013665 containerd[1460]: 2025-01-29 11:57:57.998 [INFO][3203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.195/32] ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.013665 containerd[1460]: 2025-01-29 11:57:57.998 [INFO][3203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.013665 containerd[1460]: 2025-01-29 11:57:58.001 [INFO][3203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.013818 containerd[1460]: 2025-01-29 11:57:58.001 [INFO][3203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ed823ec4-4940-433d-8f88-3dec164c6236", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ea:09:2c:f5:60:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:57:58.013818 containerd[1460]: 2025-01-29 11:57:58.008 [INFO][3203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.125-k8s-nfs--server--provisioner--0-eth0"
Jan 29 11:57:58.258965 containerd[1460]: time="2025-01-29T11:57:58.258827639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:57:58.259667 containerd[1460]: time="2025-01-29T11:57:58.259591650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:57:58.259667 containerd[1460]: time="2025-01-29T11:57:58.259627949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:57:58.259796 containerd[1460]: time="2025-01-29T11:57:58.259736874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:57:58.283066 systemd[1]: Started cri-containerd-0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1.scope - libcontainer container 0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1.
Jan 29 11:57:58.296763 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:57:58.324774 containerd[1460]: time="2025-01-29T11:57:58.324709462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ed823ec4-4940-433d-8f88-3dec164c6236,Namespace:default,Attempt:0,} returns sandbox id \"0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1\""
Jan 29 11:57:58.326540 containerd[1460]: time="2025-01-29T11:57:58.326415087Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 29 11:57:58.712867 kubelet[1768]: E0129 11:57:58.712796 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:57:58.849780 systemd[1]: run-containerd-runc-k8s.io-0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1-runc.OiiceE.mount: Deactivated successfully.
Jan 29 11:57:59.545163 systemd-networkd[1388]: cali60e51b789ff: Gained IPv6LL
Jan 29 11:57:59.713734 kubelet[1768]: E0129 11:57:59.713650 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:00.714564 kubelet[1768]: E0129 11:58:00.714498 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:00.786544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274493849.mount: Deactivated successfully.
Jan 29 11:58:01.714738 kubelet[1768]: E0129 11:58:01.714673 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:02.715588 kubelet[1768]: E0129 11:58:02.715495 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:02.964579 containerd[1460]: time="2025-01-29T11:58:02.964487909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:02.965846 containerd[1460]: time="2025-01-29T11:58:02.965694992Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 29 11:58:02.967087 containerd[1460]: time="2025-01-29T11:58:02.967039924Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:02.970008 containerd[1460]: time="2025-01-29T11:58:02.969965633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:02.970846 containerd[1460]: time="2025-01-29T11:58:02.970809071Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.644326456s"
Jan 29 11:58:02.970889 containerd[1460]: time="2025-01-29T11:58:02.970847303Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 29 11:58:02.973167 containerd[1460]: time="2025-01-29T11:58:02.973124691Z" level=info msg="CreateContainer within sandbox \"0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 29 11:58:02.986605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396713721.mount: Deactivated successfully.
Jan 29 11:58:02.989176 containerd[1460]: time="2025-01-29T11:58:02.989120760Z" level=info msg="CreateContainer within sandbox \"0d4f8f9855e123fbdc1c80ae1aed641c34e021012954a955d3b795f9ee35b2a1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"74d81b88308a10402f61bb05c91a779f599407a420cd1d9157108472180985b6\""
Jan 29 11:58:02.989766 containerd[1460]: time="2025-01-29T11:58:02.989738273Z" level=info msg="StartContainer for \"74d81b88308a10402f61bb05c91a779f599407a420cd1d9157108472180985b6\""
Jan 29 11:58:03.026178 systemd[1]: Started cri-containerd-74d81b88308a10402f61bb05c91a779f599407a420cd1d9157108472180985b6.scope - libcontainer container 74d81b88308a10402f61bb05c91a779f599407a420cd1d9157108472180985b6.
Jan 29 11:58:03.252326 containerd[1460]: time="2025-01-29T11:58:03.252173400Z" level=info msg="StartContainer for \"74d81b88308a10402f61bb05c91a779f599407a420cd1d9157108472180985b6\" returns successfully"
Jan 29 11:58:03.584310 kubelet[1768]: I0129 11:58:03.584106 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9384418380000001 podStartE2EDuration="6.58408826s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:57:58.326067603 +0000 UTC m=+47.110014365" lastFinishedPulling="2025-01-29 11:58:02.971714024 +0000 UTC m=+51.755660787" observedRunningTime="2025-01-29 11:58:03.583611163 +0000 UTC m=+52.367557925" watchObservedRunningTime="2025-01-29 11:58:03.58408826 +0000 UTC m=+52.368035022"
Jan 29 11:58:03.716059 kubelet[1768]: E0129 11:58:03.715998 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:04.716478 kubelet[1768]: E0129 11:58:04.716415 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:05.717612 kubelet[1768]: E0129 11:58:05.717526 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:06.718155 kubelet[1768]: E0129 11:58:06.718084 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:07.718895 kubelet[1768]: E0129 11:58:07.718778 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:08.720032 kubelet[1768]: E0129 11:58:08.719953 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:09.721062 kubelet[1768]: E0129 11:58:09.720977 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:10.721971 kubelet[1768]: E0129 11:58:10.721822 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:11.679113 kubelet[1768]: E0129 11:58:11.679029 1768 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:11.702692 containerd[1460]: time="2025-01-29T11:58:11.702649572Z" level=info msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\""
Jan 29 11:58:11.722369 kubelet[1768]: E0129 11:58:11.722304 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.741 [WARNING][3412] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8cc386dc-04f6-4f58-9b8c-56b8293a2190", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f", Pod:"nginx-deployment-85f456d6dd-j4f8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf385d48b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.742 [INFO][3412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32"
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.742 [INFO][3412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" iface="eth0" netns=""
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.742 [INFO][3412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32"
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.742 [INFO][3412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32"
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.764 [INFO][3419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0"
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.764 [INFO][3419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.764 [INFO][3419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.770 [WARNING][3419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.770 [INFO][3419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.772 [INFO][3419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:11.777841 containerd[1460]: 2025-01-29 11:58:11.775 [INFO][3412] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:58:11.778426 containerd[1460]: time="2025-01-29T11:58:11.777917282Z" level=info msg="TearDown network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" successfully" Jan 29 11:58:11.778426 containerd[1460]: time="2025-01-29T11:58:11.778008834Z" level=info msg="StopPodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" returns successfully" Jan 29 11:58:11.778913 containerd[1460]: time="2025-01-29T11:58:11.778884009Z" level=info msg="RemovePodSandbox for \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\"" Jan 29 11:58:11.779000 containerd[1460]: time="2025-01-29T11:58:11.778923353Z" level=info msg="Forcibly stopping sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\"" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.819 [WARNING][3441] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8cc386dc-04f6-4f58-9b8c-56b8293a2190", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"de2e874df91962f792fc8d2eac41951e87be6ce30b27426f2ccb0b56e433ad9f", Pod:"nginx-deployment-85f456d6dd-j4f8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf385d48b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.819 [INFO][3441] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.819 [INFO][3441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" iface="eth0" netns="" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.819 [INFO][3441] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.819 [INFO][3441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.842 [INFO][3448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.842 [INFO][3448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.842 [INFO][3448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.848 [WARNING][3448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.848 [INFO][3448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" HandleID="k8s-pod-network.f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Workload="10.0.0.125-k8s-nginx--deployment--85f456d6dd--j4f8v-eth0" Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.849 [INFO][3448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:11.854296 containerd[1460]: 2025-01-29 11:58:11.852 [INFO][3441] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32" Jan 29 11:58:11.855025 containerd[1460]: time="2025-01-29T11:58:11.854355002Z" level=info msg="TearDown network for sandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" successfully" Jan 29 11:58:11.919586 containerd[1460]: time="2025-01-29T11:58:11.919505416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:58:11.919586 containerd[1460]: time="2025-01-29T11:58:11.919613359Z" level=info msg="RemovePodSandbox \"f4357dca3bdab23f504c2f8745b88bb6f3ee11bf395e019838e88f8f77c06d32\" returns successfully" Jan 29 11:58:11.920438 containerd[1460]: time="2025-01-29T11:58:11.920388856Z" level=info msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.957 [WARNING][3471] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-csi--node--driver--h6rn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9d13e01-6e79-4768-8067-8fdf452aca9e", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118", Pod:"csi-node-driver-h6rn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3dc71b523", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.957 [INFO][3471] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.957 [INFO][3471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" iface="eth0" netns="" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.957 [INFO][3471] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.957 [INFO][3471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.981 [INFO][3478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.981 [INFO][3478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.981 [INFO][3478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.987 [WARNING][3478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.987 [INFO][3478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.990 [INFO][3478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:11.995180 containerd[1460]: 2025-01-29 11:58:11.992 [INFO][3471] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:11.995180 containerd[1460]: time="2025-01-29T11:58:11.995108817Z" level=info msg="TearDown network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" successfully" Jan 29 11:58:11.995180 containerd[1460]: time="2025-01-29T11:58:11.995152579Z" level=info msg="StopPodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" returns successfully" Jan 29 11:58:11.995963 containerd[1460]: time="2025-01-29T11:58:11.995808772Z" level=info msg="RemovePodSandbox for \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" Jan 29 11:58:11.995963 containerd[1460]: time="2025-01-29T11:58:11.995844370Z" level=info msg="Forcibly stopping sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\"" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.038 [WARNING][3501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-csi--node--driver--h6rn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9d13e01-6e79-4768-8067-8fdf452aca9e", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"09165dd80ec9e30521318e853442b7d48c11b8fc412674c4c87bcb998efd3118", Pod:"csi-node-driver-h6rn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3dc71b523", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.039 [INFO][3501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.039 [INFO][3501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" iface="eth0" netns="" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.039 [INFO][3501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.039 [INFO][3501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.060 [INFO][3508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.061 [INFO][3508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.061 [INFO][3508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.067 [WARNING][3508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.067 [INFO][3508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" HandleID="k8s-pod-network.0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Workload="10.0.0.125-k8s-csi--node--driver--h6rn2-eth0" Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.069 [INFO][3508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:12.077852 containerd[1460]: 2025-01-29 11:58:12.075 [INFO][3501] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62" Jan 29 11:58:12.078607 containerd[1460]: time="2025-01-29T11:58:12.077899625Z" level=info msg="TearDown network for sandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" successfully" Jan 29 11:58:12.081886 containerd[1460]: time="2025-01-29T11:58:12.081837655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:58:12.082005 containerd[1460]: time="2025-01-29T11:58:12.081910962Z" level=info msg="RemovePodSandbox \"0d2b450ad4fca2b39e73332a01937e1e9d5f6ae49a64f0132148aef4ed62ba62\" returns successfully" Jan 29 11:58:12.572402 kubelet[1768]: I0129 11:58:12.572334 1768 topology_manager.go:215] "Topology Admit Handler" podUID="11a26aff-fcd8-459b-9d68-660d658f7eda" podNamespace="default" podName="test-pod-1" Jan 29 11:58:12.578784 systemd[1]: Created slice kubepods-besteffort-pod11a26aff_fcd8_459b_9d68_660d658f7eda.slice - libcontainer container kubepods-besteffort-pod11a26aff_fcd8_459b_9d68_660d658f7eda.slice. Jan 29 11:58:12.719168 kubelet[1768]: I0129 11:58:12.719081 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c26bfd1-2454-46dd-a5d2-1802abe1f9f0\" (UniqueName: \"kubernetes.io/nfs/11a26aff-fcd8-459b-9d68-660d658f7eda-pvc-2c26bfd1-2454-46dd-a5d2-1802abe1f9f0\") pod \"test-pod-1\" (UID: \"11a26aff-fcd8-459b-9d68-660d658f7eda\") " pod="default/test-pod-1" Jan 29 11:58:12.719168 kubelet[1768]: I0129 11:58:12.719162 1768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4wl\" (UniqueName: \"kubernetes.io/projected/11a26aff-fcd8-459b-9d68-660d658f7eda-kube-api-access-lz4wl\") pod \"test-pod-1\" (UID: \"11a26aff-fcd8-459b-9d68-660d658f7eda\") " pod="default/test-pod-1" Jan 29 11:58:12.723103 kubelet[1768]: E0129 11:58:12.723064 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:58:12.853956 kernel: FS-Cache: Loaded Jan 29 11:58:12.923444 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:58:12.923579 kernel: RPC: Registered udp transport module. Jan 29 11:58:12.923600 kernel: RPC: Registered tcp transport module. Jan 29 11:58:12.923977 kernel: RPC: Registered tcp-with-tls transport module. 
Jan 29 11:58:12.925277 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 11:58:13.212333 kernel: NFS: Registering the id_resolver key type Jan 29 11:58:13.212463 kernel: Key type id_resolver registered Jan 29 11:58:13.212530 kernel: Key type id_legacy registered Jan 29 11:58:13.244371 nfsidmap[3535]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:58:13.249377 nfsidmap[3538]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:58:13.482745 containerd[1460]: time="2025-01-29T11:58:13.482673316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:11a26aff-fcd8-459b-9d68-660d658f7eda,Namespace:default,Attempt:0,}" Jan 29 11:58:13.693304 systemd-networkd[1388]: cali5ec59c6bf6e: Link UP Jan 29 11:58:13.694068 systemd-networkd[1388]: cali5ec59c6bf6e: Gained carrier Jan 29 11:58:13.724079 kubelet[1768]: E0129 11:58:13.724003 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.544 [INFO][3541] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.125-k8s-test--pod--1-eth0 default 11a26aff-fcd8-459b-9d68-660d658f7eda 1223 0 2025-01-29 11:57:57 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.125 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.544 [INFO][3541] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.571 [INFO][3553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" HandleID="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Workload="10.0.0.125-k8s-test--pod--1-eth0" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.579 [INFO][3553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" HandleID="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Workload="10.0.0.125-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030d400), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.125", "pod":"test-pod-1", "timestamp":"2025-01-29 11:58:13.5713712 +0000 UTC"}, Hostname:"10.0.0.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.579 [INFO][3553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.579 [INFO][3553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.579 [INFO][3553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.125' Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.581 [INFO][3553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.587 [INFO][3553] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.591 [INFO][3553] ipam/ipam.go 489: Trying affinity for 192.168.83.192/26 host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.593 [INFO][3553] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.192/26 host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.596 [INFO][3553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.192/26 host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.596 [INFO][3553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.192/26 handle="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.597 [INFO][3553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4 Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.642 [INFO][3553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.192/26 handle="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.687 [INFO][3553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.196/26] block=192.168.83.192/26 
handle="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.687 [INFO][3553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.196/26] handle="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" host="10.0.0.125" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.687 [INFO][3553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.687 [INFO][3553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.196/26] IPv6=[] ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" HandleID="k8s-pod-network.1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Workload="10.0.0.125-k8s-test--pod--1-eth0" Jan 29 11:58:13.736186 containerd[1460]: 2025-01-29 11:58:13.690 [INFO][3541] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"11a26aff-fcd8-459b-9d68-660d658f7eda", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.125", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:58:13.737084 containerd[1460]: 2025-01-29 11:58:13.690 [INFO][3541] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.196/32] ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0"
Jan 29 11:58:13.737084 containerd[1460]: 2025-01-29 11:58:13.690 [INFO][3541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0"
Jan 29 11:58:13.737084 containerd[1460]: 2025-01-29 11:58:13.693 [INFO][3541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0"
Jan 29 11:58:13.737084 containerd[1460]: 2025-01-29 11:58:13.694 [INFO][3541] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.125-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"11a26aff-fcd8-459b-9d68-660d658f7eda", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.125", ContainerID:"1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"fa:3a:0f:98:9e:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:58:13.737084 containerd[1460]: 2025-01-29 11:58:13.733 [INFO][3541] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.125-k8s-test--pod--1-eth0"
Jan 29 11:58:13.761648 containerd[1460]: time="2025-01-29T11:58:13.761506943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:58:13.761648 containerd[1460]: time="2025-01-29T11:58:13.761590260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:58:13.761648 containerd[1460]: time="2025-01-29T11:58:13.761607242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:58:13.761855 containerd[1460]: time="2025-01-29T11:58:13.761730423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:58:13.785267 systemd[1]: Started cri-containerd-1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4.scope - libcontainer container 1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4.
Jan 29 11:58:13.799343 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:58:13.823690 containerd[1460]: time="2025-01-29T11:58:13.823640816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:11a26aff-fcd8-459b-9d68-660d658f7eda,Namespace:default,Attempt:0,} returns sandbox id \"1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4\""
Jan 29 11:58:13.826023 containerd[1460]: time="2025-01-29T11:58:13.825987515Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 11:58:14.250199 containerd[1460]: time="2025-01-29T11:58:14.250120904Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:14.251082 containerd[1460]: time="2025-01-29T11:58:14.250976212Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 11:58:14.254347 containerd[1460]: time="2025-01-29T11:58:14.254297721Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 428.258349ms"
Jan 29 11:58:14.254347 containerd[1460]: time="2025-01-29T11:58:14.254342946Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 11:58:14.257006 containerd[1460]: time="2025-01-29T11:58:14.256969580Z" level=info msg="CreateContainer within sandbox \"1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 11:58:14.269778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200951991.mount: Deactivated successfully.
Jan 29 11:58:14.272183 containerd[1460]: time="2025-01-29T11:58:14.272132326Z" level=info msg="CreateContainer within sandbox \"1a7c2295c2e324c7408ee4b6f5ec5b41d87ebc44e7ece0a6d5155f63439613f4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a8032d37e64c7c92801d75eef97261cf4ef55dd73c426128500549c149c5dbe3\""
Jan 29 11:58:14.272907 containerd[1460]: time="2025-01-29T11:58:14.272879320Z" level=info msg="StartContainer for \"a8032d37e64c7c92801d75eef97261cf4ef55dd73c426128500549c149c5dbe3\""
Jan 29 11:58:14.308105 systemd[1]: Started cri-containerd-a8032d37e64c7c92801d75eef97261cf4ef55dd73c426128500549c149c5dbe3.scope - libcontainer container a8032d37e64c7c92801d75eef97261cf4ef55dd73c426128500549c149c5dbe3.
Jan 29 11:58:14.338542 containerd[1460]: time="2025-01-29T11:58:14.338472586Z" level=info msg="StartContainer for \"a8032d37e64c7c92801d75eef97261cf4ef55dd73c426128500549c149c5dbe3\" returns successfully"
Jan 29 11:58:14.709431 kubelet[1768]: I0129 11:58:14.709336 1768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.279538758 podStartE2EDuration="17.709313124s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:58:13.825330762 +0000 UTC m=+62.609277524" lastFinishedPulling="2025-01-29 11:58:14.255105128 +0000 UTC m=+63.039051890" observedRunningTime="2025-01-29 11:58:14.709011597 +0000 UTC m=+63.492958379" watchObservedRunningTime="2025-01-29 11:58:14.709313124 +0000 UTC m=+63.493259886"
Jan 29 11:58:14.724263 kubelet[1768]: E0129 11:58:14.724195 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:14.905216 systemd-networkd[1388]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 11:58:15.725079 kubelet[1768]: E0129 11:58:15.725027 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:16.725634 kubelet[1768]: E0129 11:58:16.725569 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:17.725822 kubelet[1768]: E0129 11:58:17.725741 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:18.726647 kubelet[1768]: E0129 11:58:18.726565 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:19.727100 kubelet[1768]: E0129 11:58:19.727026 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:58:20.727560 kubelet[1768]: E0129 11:58:20.727489 1768 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"