Feb 13 19:28:15.214775 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:28:15.214833 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:28:15.214850 kernel: BIOS-provided physical RAM map:
Feb 13 19:28:15.214862 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:28:15.214874 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:28:15.214884 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:28:15.214900 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:28:15.214912 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:28:15.214924 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:28:15.214936 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:28:15.214949 kernel: NX (Execute Disable) protection: active
Feb 13 19:28:15.214961 kernel: APIC: Static calls initialized
Feb 13 19:28:15.214974 kernel: SMBIOS 2.7 present.
Feb 13 19:28:15.214986 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:28:15.215005 kernel: Hypervisor detected: KVM
Feb 13 19:28:15.215020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:28:15.215035 kernel: kvm-clock: using sched offset of 8855171241 cycles
Feb 13 19:28:15.215050 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:28:15.215065 kernel: tsc: Detected 2499.994 MHz processor
Feb 13 19:28:15.215078 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:28:15.215093 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:28:15.215111 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:28:15.215126 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:28:15.215141 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:28:15.215155 kernel: Using GB pages for direct mapping
Feb 13 19:28:15.215169 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:28:15.215183 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:28:15.215198 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:28:15.215212 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:28:15.215227 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:28:15.215245 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:28:15.215258 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:28:15.215270 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:28:15.215281 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:28:15.215293 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:28:15.215306 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:28:15.215319 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:28:15.215332 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:28:15.215344 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:28:15.215383 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:28:15.215404 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:28:15.215419 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:28:15.215434 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:28:15.215449 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:28:15.215468 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:28:15.215483 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:28:15.215497 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:28:15.215512 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:28:15.215527 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:28:15.215542 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:28:15.215557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:28:15.215572 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:28:15.215587 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:28:15.215605 kernel: Zone ranges:
Feb 13 19:28:15.215620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:28:15.215635 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:28:15.215650 kernel: Normal empty
Feb 13 19:28:15.215665 kernel: Movable zone start for each node
Feb 13 19:28:15.215833 kernel: Early memory node ranges
Feb 13 19:28:15.215851 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:28:15.215936 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:28:15.215951 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:28:15.215966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:28:15.215985 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:28:15.215999 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:28:15.216014 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:28:15.216029 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:28:15.216044 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:28:15.216059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:28:15.216074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:28:15.216090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:28:15.216105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:28:15.216124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:28:15.216139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:28:15.216154 kernel: TSC deadline timer available
Feb 13 19:28:15.216169 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:28:15.216184 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:28:15.216199 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:28:15.216215 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:28:15.216230 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:28:15.216246 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:28:15.216265 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:28:15.216280 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:28:15.218262 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:28:15.218290 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:28:15.218304 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:28:15.218319 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:28:15.218332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:28:15.218344 kernel: random: crng init done
Feb 13 19:28:15.218381 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:28:15.218395 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:28:15.218407 kernel: Fallback order for Node 0: 0
Feb 13 19:28:15.218420 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:28:15.218432 kernel: Policy zone: DMA32
Feb 13 19:28:15.220015 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:28:15.220041 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 127200K reserved, 0K cma-reserved)
Feb 13 19:28:15.220057 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:28:15.220072 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:28:15.220094 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:28:15.220108 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:28:15.220123 kernel: Dynamic Preempt: voluntary
Feb 13 19:28:15.220138 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:28:15.220155 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:28:15.220170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:28:15.220185 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:28:15.220200 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:28:15.220215 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:28:15.220234 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:28:15.220250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:28:15.220262 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:28:15.220339 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:28:15.220353 kernel: Console: colour VGA+ 80x25
Feb 13 19:28:15.220385 kernel: printk: console [ttyS0] enabled
Feb 13 19:28:15.220397 kernel: ACPI: Core revision 20230628
Feb 13 19:28:15.220410 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:28:15.220421 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:28:15.220436 kernel: x2apic enabled
Feb 13 19:28:15.220449 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:28:15.220472 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 13 19:28:15.220488 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Feb 13 19:28:15.220501 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:28:15.220514 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:28:15.220527 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:28:15.220539 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:28:15.220551 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:28:15.220564 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:28:15.220579 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:28:15.220594 kernel: RETBleed: Vulnerable
Feb 13 19:28:15.220811 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:28:15.220832 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:28:15.220845 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:28:15.220859 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:28:15.220874 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:28:15.220889 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:28:15.221012 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:28:15.221037 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:28:15.221052 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:28:15.221068 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:28:15.221083 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:28:15.221098 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:28:15.221113 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:28:15.221128 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:28:15.221143 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:28:15.221158 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:28:15.221173 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:28:15.221186 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:28:15.221205 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:28:15.221220 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:28:15.221233 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:28:15.221289 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:28:15.221307 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:28:15.221323 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:28:15.221339 kernel: landlock: Up and running.
Feb 13 19:28:15.221354 kernel: SELinux: Initializing.
Feb 13 19:28:15.221613 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:28:15.221631 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:28:15.221646 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:28:15.221668 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:28:15.221685 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:28:15.221701 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:28:15.221718 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:28:15.221734 kernel: signal: max sigframe size: 3632
Feb 13 19:28:15.221751 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:28:15.221768 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:28:15.221785 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:28:15.221801 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:28:15.221820 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:28:15.221837 kernel: .... node #0, CPUs: #1
Feb 13 19:28:15.221854 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:28:15.221873 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:28:15.221889 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:28:15.221905 kernel: smpboot: Max logical packages: 1
Feb 13 19:28:15.221921 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Feb 13 19:28:15.221938 kernel: devtmpfs: initialized
Feb 13 19:28:15.221954 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:28:15.221973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:28:15.221989 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:28:15.222006 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:28:15.222023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:28:15.222039 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:28:15.222056 kernel: audit: type=2000 audit(1739474893.695:1): state=initialized audit_enabled=0 res=1
Feb 13 19:28:15.222072 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:28:15.222088 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:28:15.222105 kernel: cpuidle: using governor menu
Feb 13 19:28:15.222125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:28:15.222141 kernel: dca service started, version 1.12.1
Feb 13 19:28:15.222158 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:28:15.222174 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:28:15.222191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:28:15.222207 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:28:15.222224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:28:15.222303 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:28:15.222321 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:28:15.222341 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:28:15.222374 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:28:15.222390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:28:15.222407 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:28:15.222423 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:28:15.222439 kernel: ACPI: Interpreter enabled
Feb 13 19:28:15.222454 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:28:15.222515 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:28:15.222534 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:28:15.222592 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:28:15.222613 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:28:15.222629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:28:15.223080 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:28:15.223470 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:28:15.223623 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:28:15.223643 kernel: acpiphp: Slot [3] registered
Feb 13 19:28:15.223662 kernel: acpiphp: Slot [4] registered
Feb 13 19:28:15.223742 kernel: acpiphp: Slot [5] registered
Feb 13 19:28:15.223758 kernel: acpiphp: Slot [6] registered
Feb 13 19:28:15.223772 kernel: acpiphp: Slot [7] registered
Feb 13 19:28:15.223785 kernel: acpiphp: Slot [8] registered
Feb 13 19:28:15.223799 kernel: acpiphp: Slot [9] registered
Feb 13 19:28:15.223813 kernel: acpiphp: Slot [10] registered
Feb 13 19:28:15.223827 kernel: acpiphp: Slot [11] registered
Feb 13 19:28:15.223842 kernel: acpiphp: Slot [12] registered
Feb 13 19:28:15.223964 kernel: acpiphp: Slot [13] registered
Feb 13 19:28:15.223979 kernel: acpiphp: Slot [14] registered
Feb 13 19:28:15.223992 kernel: acpiphp: Slot [15] registered
Feb 13 19:28:15.224007 kernel: acpiphp: Slot [16] registered
Feb 13 19:28:15.224020 kernel: acpiphp: Slot [17] registered
Feb 13 19:28:15.224034 kernel: acpiphp: Slot [18] registered
Feb 13 19:28:15.224047 kernel: acpiphp: Slot [19] registered
Feb 13 19:28:15.224061 kernel: acpiphp: Slot [20] registered
Feb 13 19:28:15.224074 kernel: acpiphp: Slot [21] registered
Feb 13 19:28:15.224088 kernel: acpiphp: Slot [22] registered
Feb 13 19:28:15.224106 kernel: acpiphp: Slot [23] registered
Feb 13 19:28:15.224120 kernel: acpiphp: Slot [24] registered
Feb 13 19:28:15.224133 kernel: acpiphp: Slot [25] registered
Feb 13 19:28:15.224146 kernel: acpiphp: Slot [26] registered
Feb 13 19:28:15.224159 kernel: acpiphp: Slot [27] registered
Feb 13 19:28:15.224174 kernel: acpiphp: Slot [28] registered
Feb 13 19:28:15.224188 kernel: acpiphp: Slot [29] registered
Feb 13 19:28:15.224201 kernel: acpiphp: Slot [30] registered
Feb 13 19:28:15.224215 kernel: acpiphp: Slot [31] registered
Feb 13 19:28:15.224235 kernel: PCI host bridge to bus 0000:00
Feb 13 19:28:15.224413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:28:15.224547 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:28:15.224883 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:28:15.225092 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:28:15.225223 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:28:15.225415 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:28:15.225588 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:28:15.225745 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:28:15.225889 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:28:15.226024 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:28:15.226170 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:28:15.226321 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:28:15.226482 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:28:15.226633 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:28:15.226829 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:28:15.226974 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:28:15.227125 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:28:15.227284 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:28:15.227442 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:28:15.227583 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:28:15.227822 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:28:15.228039 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:28:15.228263 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:28:15.228438 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:28:15.228456 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:28:15.228471 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:28:15.228486 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:28:15.228506 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:28:15.228521 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:28:15.228535 kernel: iommu: Default domain type: Translated
Feb 13 19:28:15.228549 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:28:15.228563 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:28:15.228578 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:28:15.228592 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:28:15.228606 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:28:15.229288 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:28:15.229471 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:28:15.229614 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:28:15.229635 kernel: vgaarb: loaded
Feb 13 19:28:15.229652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:28:15.229669 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:28:15.229685 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:28:15.229701 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:28:15.229717 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:28:15.229739 kernel: pnp: PnP ACPI init
Feb 13 19:28:15.229755 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:28:15.229771 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:28:15.229788 kernel: NET: Registered PF_INET protocol family
Feb 13 19:28:15.229805 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:28:15.229822 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:28:15.229838 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:28:15.229855 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:28:15.229874 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:28:15.229894 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:28:15.229911 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:28:15.229927 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:28:15.229943 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:28:15.229960 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:28:15.230095 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:28:15.230349 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:28:15.230496 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:28:15.230691 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:28:15.230845 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:28:15.230868 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:28:15.230885 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:28:15.230902 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 13 19:28:15.230918 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:28:15.230935 kernel: Initialise system trusted keyrings
Feb 13 19:28:15.230951 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:28:15.230972 kernel: Key type asymmetric registered
Feb 13 19:28:15.230987 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:28:15.231079 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:28:15.231096 kernel: io scheduler mq-deadline registered
Feb 13 19:28:15.231112 kernel: io scheduler kyber registered
Feb 13 19:28:15.231127 kernel: io scheduler bfq registered
Feb 13 19:28:15.231143 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:28:15.231160 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:28:15.231177 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:28:15.231198 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:28:15.231214 kernel: i8042: Warning: Keylock active
Feb 13 19:28:15.231231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:28:15.231247 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:28:15.231475 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:28:15.231966 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:28:15.232213 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:28:14 UTC (1739474894)
Feb 13 19:28:15.232354 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:28:15.232410 kernel: intel_pstate: CPU model not supported
Feb 13 19:28:15.232425 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:28:15.232440 kernel: Segment Routing with IPv6
Feb 13 19:28:15.232454 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:28:15.232468 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:28:15.232481 kernel: Key type dns_resolver registered
Feb 13 19:28:15.232494 kernel: IPI shorthand broadcast: enabled
Feb 13 19:28:15.232509 kernel: sched_clock: Marking stable (640001946, 240292031)->(1034309301, -154015324)
Feb 13 19:28:15.232525 kernel: registered taskstats version 1
Feb 13 19:28:15.232544 kernel: Loading compiled-in X.509 certificates
Feb 13 19:28:15.232561 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:28:15.232575 kernel: Key type .fscrypt registered
Feb 13 19:28:15.232590 kernel: Key type fscrypt-provisioning registered
Feb 13 19:28:15.232603 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:28:15.232616 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:28:15.232630 kernel: ima: No architecture policies found
Feb 13 19:28:15.232843 kernel: clk: Disabling unused clocks
Feb 13 19:28:15.232860 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:28:15.232881 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:28:15.232897 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:28:15.232911 kernel: Run /init as init process
Feb 13 19:28:15.232926 kernel: with arguments:
Feb 13 19:28:15.232941 kernel: /init
Feb 13 19:28:15.233016 kernel: with environment:
Feb 13 19:28:15.233034 kernel: HOME=/
Feb 13 19:28:15.233049 kernel: TERM=linux
Feb 13 19:28:15.233064 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:28:15.233089 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:28:15.233125 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:28:15.233146 systemd[1]: Detected virtualization amazon.
Feb 13 19:28:15.233163 systemd[1]: Detected architecture x86-64.
Feb 13 19:28:15.233180 systemd[1]: Running in initrd.
Feb 13 19:28:15.233202 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:28:15.233282 systemd[1]: Hostname set to .
Feb 13 19:28:15.233302 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:28:15.233321 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:28:15.233339 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:28:15.233398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:28:15.233419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:28:15.233437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:28:15.233460 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:28:15.233480 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:28:15.233501 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:28:15.233520 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:28:15.233538 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:28:15.233561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:28:15.233583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:28:15.233601 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:28:15.233620 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:28:15.233642 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:28:15.233660 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:28:15.233680 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:28:15.233699 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:28:15.233717 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:28:15.233736 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:28:15.233754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:28:15.233773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:28:15.233795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:28:15.233813 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:28:15.233830 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:28:15.233849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:28:15.233867 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:28:15.233890 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:28:15.233912 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:28:15.233930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:28:15.233949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:28:15.233969 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:28:15.234082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:28:15.234143 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:28:15.234185 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:28:15.234203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:28:15.234226 systemd-journald[179]: Journal started
Feb 13 19:28:15.234262 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2a13491de6d61442fdabb136796251) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:28:15.255858 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:28:15.255239 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:28:15.293723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:28:15.432814 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:28:15.432846 kernel: Bridge firewalling registered
Feb 13 19:28:15.313705 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:28:15.436913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:28:15.440647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:28:15.449075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:28:15.474845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:28:15.478596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:28:15.482600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:28:15.483093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:28:15.506377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:28:15.509091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:28:15.520727 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:28:15.523522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:28:15.533782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:28:15.554670 dracut-cmdline[215]: dracut-dracut-053
Feb 13 19:28:15.560140 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:28:15.604229 systemd-resolved[213]: Positive Trust Anchors:
Feb 13 19:28:15.604252 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:28:15.604319 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:28:15.609114 systemd-resolved[213]: Defaulting to hostname 'linux'.
Feb 13 19:28:15.613307 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:28:15.629308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:28:15.673400 kernel: SCSI subsystem initialized
Feb 13 19:28:15.684216 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:28:15.696392 kernel: iscsi: registered transport (tcp)
Feb 13 19:28:15.720626 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:28:15.720709 kernel: QLogic iSCSI HBA Driver
Feb 13 19:28:15.767142 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:28:15.773561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:28:15.822607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:28:15.822687 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:28:15.822709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:28:15.935416 kernel: raid6: avx512x4 gen() 9297 MB/s
Feb 13 19:28:15.952414 kernel: raid6: avx512x2 gen() 8594 MB/s
Feb 13 19:28:15.970401 kernel: raid6: avx512x1 gen() 12645 MB/s
Feb 13 19:28:15.987440 kernel: raid6: avx2x4 gen() 7655 MB/s
Feb 13 19:28:16.005410 kernel: raid6: avx2x2 gen() 7949 MB/s
Feb 13 19:28:16.022552 kernel: raid6: avx2x1 gen() 11399 MB/s
Feb 13 19:28:16.022634 kernel: raid6: using algorithm avx512x1 gen() 12645 MB/s
Feb 13 19:28:16.040403 kernel: raid6: .... xor() 15858 MB/s, rmw enabled
Feb 13 19:28:16.040492 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:28:16.067398 kernel: xor: automatically using best checksumming function avx
Feb 13 19:28:16.375387 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:28:16.388279 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:28:16.397044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:28:16.452608 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 19:28:16.472685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:28:16.483649 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:28:16.547408 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Feb 13 19:28:16.603771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:28:16.614098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:28:16.751908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:28:16.768803 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:28:16.797014 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:28:16.800787 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:28:16.803570 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:28:16.806347 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:28:16.821110 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:28:16.844509 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:28:16.872806 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:28:16.906555 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:28:16.906771 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:28:16.906794 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:28:16.906961 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:fd:0a:a7:bb:ab
Feb 13 19:28:16.909385 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:28:16.909447 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:28:16.909057 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:28:16.913663 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:28:16.913936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:28:16.915772 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:28:16.917215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:28:16.917463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:28:16.926704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:28:16.934244 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:28:16.934571 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:28:16.938038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:28:16.940160 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:28:16.951416 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:28:16.983692 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:28:16.983759 kernel: GPT:9289727 != 16777215
Feb 13 19:28:16.983779 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:28:16.983797 kernel: GPT:9289727 != 16777215
Feb 13 19:28:16.983814 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:28:16.983832 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:28:17.103391 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Feb 13 19:28:17.125402 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Feb 13 19:28:17.159453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:28:17.173858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:28:17.263582 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:28:17.268720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:28:17.270931 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:28:17.320201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:28:17.382667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:28:17.399312 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:28:17.409527 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:28:17.447081 disk-uuid[633]: Primary Header is updated.
Feb 13 19:28:17.447081 disk-uuid[633]: Secondary Entries is updated.
Feb 13 19:28:17.447081 disk-uuid[633]: Secondary Header is updated.
Feb 13 19:28:17.455548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:28:17.503388 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:28:18.493401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:28:18.497769 disk-uuid[634]: The operation has completed successfully.
Feb 13 19:28:18.832065 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:28:18.832257 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:28:18.950398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:28:18.970738 sh[892]: Success
Feb 13 19:28:19.018009 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:28:19.221221 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:28:19.236346 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:28:19.243179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:28:19.290728 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:28:19.290802 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:28:19.290822 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:28:19.292794 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:28:19.292846 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:28:19.339525 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:28:19.355698 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:28:19.358279 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:28:19.367651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:28:19.380715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:28:19.403088 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:28:19.403146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:28:19.403160 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:28:19.410389 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:28:19.423940 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:28:19.425281 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:28:19.433374 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:28:19.443648 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:28:19.500697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:28:19.521017 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:28:19.590476 systemd-networkd[1085]: lo: Link UP
Feb 13 19:28:19.590487 systemd-networkd[1085]: lo: Gained carrier
Feb 13 19:28:19.595081 systemd-networkd[1085]: Enumeration completed
Feb 13 19:28:19.595603 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:28:19.599581 systemd[1]: Reached target network.target - Network.
Feb 13 19:28:19.600960 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:28:19.600965 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:28:19.608018 systemd-networkd[1085]: eth0: Link UP
Feb 13 19:28:19.608026 systemd-networkd[1085]: eth0: Gained carrier
Feb 13 19:28:19.608038 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:28:19.630483 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.17.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:28:19.725683 ignition[1017]: Ignition 2.20.0
Feb 13 19:28:19.725697 ignition[1017]: Stage: fetch-offline
Feb 13 19:28:19.725936 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:19.725951 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:19.727142 ignition[1017]: Ignition finished successfully
Feb 13 19:28:19.732633 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:28:19.742003 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:28:19.762203 ignition[1095]: Ignition 2.20.0
Feb 13 19:28:19.762216 ignition[1095]: Stage: fetch
Feb 13 19:28:19.762893 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:19.762907 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:19.763028 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:19.790392 ignition[1095]: PUT result: OK
Feb 13 19:28:19.795419 ignition[1095]: parsed url from cmdline: ""
Feb 13 19:28:19.795549 ignition[1095]: no config URL provided
Feb 13 19:28:19.795561 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:28:19.795575 ignition[1095]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:28:19.795596 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:19.798768 ignition[1095]: PUT result: OK
Feb 13 19:28:19.798824 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:28:19.800863 ignition[1095]: GET result: OK
Feb 13 19:28:19.801195 ignition[1095]: parsing config with SHA512: b7cf0524a5981d222c70a2f127625009a98527be026671730fe53bc44fe029ab20f0dcb4fe0f2396096f7986988ea938b8864007d060c18c7e55fa24263c1a4b
Feb 13 19:28:19.812288 unknown[1095]: fetched base config from "system"
Feb 13 19:28:19.812304 unknown[1095]: fetched base config from "system"
Feb 13 19:28:19.812312 unknown[1095]: fetched user config from "aws"
Feb 13 19:28:19.813785 ignition[1095]: fetch: fetch complete
Feb 13 19:28:19.813792 ignition[1095]: fetch: fetch passed
Feb 13 19:28:19.813857 ignition[1095]: Ignition finished successfully
Feb 13 19:28:19.819153 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:28:19.831355 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:28:19.859024 ignition[1102]: Ignition 2.20.0
Feb 13 19:28:19.859038 ignition[1102]: Stage: kargs
Feb 13 19:28:19.859496 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:19.859507 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:19.859593 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:19.862706 ignition[1102]: PUT result: OK
Feb 13 19:28:19.874004 ignition[1102]: kargs: kargs passed
Feb 13 19:28:19.874067 ignition[1102]: Ignition finished successfully
Feb 13 19:28:19.876037 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:28:19.891904 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:28:19.928573 ignition[1108]: Ignition 2.20.0
Feb 13 19:28:19.928590 ignition[1108]: Stage: disks
Feb 13 19:28:19.929770 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:19.929788 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:19.929930 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:19.933851 ignition[1108]: PUT result: OK
Feb 13 19:28:19.945325 ignition[1108]: disks: disks passed
Feb 13 19:28:19.945501 ignition[1108]: Ignition finished successfully
Feb 13 19:28:19.955910 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:28:19.958189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:28:19.966586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:28:19.975587 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:28:19.993173 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:28:20.004779 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:28:20.015855 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:28:20.068232 systemd-fsck[1117]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:28:20.072715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:28:20.292857 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:28:20.487906 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:28:20.491082 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:28:20.491915 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:28:20.506133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:28:20.522142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:28:20.527344 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:28:20.530624 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:28:20.530674 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:28:20.577922 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:28:20.621485 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1136)
Feb 13 19:28:20.621836 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:28:20.638632 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:28:20.638711 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:28:20.638731 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:28:20.660705 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:28:20.677869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:28:20.926231 initrd-setup-root[1161]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:28:20.938516 initrd-setup-root[1168]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:28:20.950011 initrd-setup-root[1175]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:28:20.959488 initrd-setup-root[1182]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:28:21.186227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:28:21.200602 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:28:21.203126 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:28:21.227385 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:28:21.294061 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:28:21.310197 ignition[1249]: INFO : Ignition 2.20.0
Feb 13 19:28:21.310197 ignition[1249]: INFO : Stage: mount
Feb 13 19:28:21.318330 ignition[1249]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:21.318330 ignition[1249]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:21.318330 ignition[1249]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:21.328302 ignition[1249]: INFO : PUT result: OK
Feb 13 19:28:21.340792 ignition[1249]: INFO : mount: mount passed
Feb 13 19:28:21.344109 ignition[1249]: INFO : Ignition finished successfully
Feb 13 19:28:21.352903 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:28:21.368353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:28:21.392261 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:28:21.441642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:28:21.459007 systemd-networkd[1085]: eth0: Gained IPv6LL
Feb 13 19:28:21.499613 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1262)
Feb 13 19:28:21.507016 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:28:21.507099 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:28:21.507123 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:28:21.523457 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:28:21.529984 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:28:21.566439 ignition[1278]: INFO : Ignition 2.20.0
Feb 13 19:28:21.566439 ignition[1278]: INFO : Stage: files
Feb 13 19:28:21.570876 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:21.570876 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:21.570876 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:21.577048 ignition[1278]: INFO : PUT result: OK
Feb 13 19:28:21.591375 ignition[1278]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:28:21.604924 ignition[1278]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:28:21.604924 ignition[1278]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:28:21.615149 ignition[1278]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:28:21.616806 ignition[1278]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:28:21.620532 ignition[1278]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:28:21.617231 unknown[1278]: wrote ssh authorized keys file for user: core
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:28:21.628729 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:28:21.989080 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:28:22.419724 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:28:22.423539 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:28:22.423539 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:28:22.423539 ignition[1278]: INFO : files: files passed
Feb 13 19:28:22.423539 ignition[1278]: INFO : Ignition finished successfully
Feb 13 19:28:22.426324 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:28:22.438559 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:28:22.443443 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:28:22.446860 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:28:22.448150 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:28:22.465299 initrd-setup-root-after-ignition[1307]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:28:22.465299 initrd-setup-root-after-ignition[1307]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:28:22.469510 initrd-setup-root-after-ignition[1311]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:28:22.472490 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:28:22.475280 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:28:22.488726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:28:22.515179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:28:22.515312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:28:22.519354 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:28:22.521561 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:28:22.524520 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:28:22.537580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:28:22.555094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:28:22.562078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:28:22.587921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:28:22.591033 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:28:22.593045 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:28:22.597116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:28:22.597293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:28:22.601540 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:28:22.603429 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:28:22.607063 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:28:22.608635 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:28:22.614239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:28:22.617022 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:28:22.619753 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:28:22.621713 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:28:22.625841 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:28:22.626061 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:28:22.629424 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:28:22.629559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:28:22.635009 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:28:22.636821 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:28:22.638734 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:28:22.640097 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:28:22.641990 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:28:22.642152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:28:22.649502 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:28:22.649681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:28:22.652261 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:28:22.652397 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:28:22.668698 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:28:22.673136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:28:22.676114 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:28:22.676441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:28:22.681192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:28:22.681564 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:28:22.695701 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:28:22.724461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:28:22.747015 ignition[1331]: INFO : Ignition 2.20.0
Feb 13 19:28:22.747015 ignition[1331]: INFO : Stage: umount
Feb 13 19:28:22.749502 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:28:22.749502 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:28:22.749502 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:28:22.754546 ignition[1331]: INFO : PUT result: OK
Feb 13 19:28:22.750990 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:28:22.762392 ignition[1331]: INFO : umount: umount passed
Feb 13 19:28:22.762392 ignition[1331]: INFO : Ignition finished successfully
Feb 13 19:28:22.762902 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:28:22.763026 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:28:22.772191 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:28:22.772350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:28:22.778003 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:28:22.778119 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:28:22.781007 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:28:22.781180 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:28:22.783742 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:28:22.783824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:28:22.786044 systemd[1]: Stopped target network.target - Network.
Feb 13 19:28:22.787251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:28:22.787317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:28:22.801951 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:28:22.805032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:28:22.811081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:28:22.816946 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:28:22.830497 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:28:22.832697 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:28:22.832784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:28:22.836003 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:28:22.836063 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:28:22.850392 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:28:22.850554 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:28:22.852381 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:28:22.852447 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:28:22.853881 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:28:22.853940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:28:22.884595 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:28:22.884995 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:28:22.897940 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:28:22.898078 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:28:22.905288 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:28:22.905597 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:28:22.905692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:28:22.937406 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:28:22.940506 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:28:22.940584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:28:22.961509 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:28:22.963041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:28:22.963213 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:28:22.965952 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:28:22.966166 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:28:22.969045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:28:22.969122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:28:22.978787 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:28:22.978857 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:28:22.990070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:28:23.011812 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:28:23.015767 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:28:23.033786 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:28:23.035685 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:28:23.041985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:28:23.042081 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:28:23.050949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:28:23.051140 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:28:23.063073 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:28:23.068980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:28:23.080853 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:28:23.081044 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:28:23.086202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:28:23.086791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:28:23.116202 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:28:23.118271 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:28:23.118398 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:28:23.126049 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:28:23.126145 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:28:23.128269 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:28:23.128343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:28:23.131500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:28:23.131610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:28:23.140710 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:28:23.140799 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:28:23.141508 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:28:23.141606 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:28:23.143771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:28:23.144015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:28:23.148103 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:28:23.155633 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:28:23.177542 systemd[1]: Switching root.
Feb 13 19:28:23.215809 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:28:23.216407 systemd-journald[179]: Journal stopped
Feb 13 19:28:25.289251 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:28:25.289458 kernel: SELinux: policy capability open_perms=1
Feb 13 19:28:25.289484 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:28:25.289502 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:28:25.289518 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:28:25.289541 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:28:25.289558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:28:25.289575 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:28:25.289593 kernel: audit: type=1403 audit(1739474903.626:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:28:25.289618 systemd[1]: Successfully loaded SELinux policy in 76.541ms.
Feb 13 19:28:25.289654 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.032ms.
Feb 13 19:28:25.289675 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:28:25.289695 systemd[1]: Detected virtualization amazon.
Feb 13 19:28:25.289712 systemd[1]: Detected architecture x86-64.
Feb 13 19:28:25.289731 systemd[1]: Detected first boot.
Feb 13 19:28:25.289749 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:28:25.289768 zram_generator::config[1375]: No configuration found.
Feb 13 19:28:25.289786 kernel: Guest personality initialized and is inactive
Feb 13 19:28:25.289811 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 19:28:25.289830 kernel: Initialized host personality
Feb 13 19:28:25.289848 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:28:25.289865 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:28:25.289885 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:28:25.289904 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:28:25.289923 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:28:25.289941 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:28:25.291015 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:28:25.291080 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:28:25.291099 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:28:25.291118 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:28:25.291137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:28:25.291212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:28:25.291234 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:28:25.291252 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:28:25.291270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:28:25.291290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:28:25.291313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:28:25.291332 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:28:25.291351 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:28:25.291382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:28:25.291401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:28:25.291419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:28:25.291437 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:28:25.291459 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:28:25.292396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:28:25.292438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:28:25.292503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:28:25.292525 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:28:25.292547 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:28:25.292567 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:28:25.292585 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:28:25.292604 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:28:25.292634 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:28:25.292653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:28:25.292681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:28:25.292699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:28:25.292718 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:28:25.292754 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:28:25.292772 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:28:25.292789 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:28:25.292809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:25.292833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:28:25.292851 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:28:25.292871 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:28:25.292891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:28:25.292910 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:28:25.292931 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:28:25.292950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:28:25.292968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:28:25.292988 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:28:25.293010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:28:25.293029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:28:25.293047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:28:25.293071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:28:25.293089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:28:25.293109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:28:25.293127 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:28:25.293145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:28:25.293167 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:28:25.293185 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:28:25.293204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:28:25.293305 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:28:25.293329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:28:25.293438 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:28:25.293464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:28:25.293483 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:28:25.293503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:28:25.293526 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:28:25.293642 systemd[1]: Stopped verity-setup.service.
Feb 13 19:28:25.293667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:25.293730 systemd-journald[1465]: Collecting audit messages is disabled.
Feb 13 19:28:25.293772 kernel: ACPI: bus type drm_connector registered
Feb 13 19:28:25.293790 kernel: fuse: init (API version 7.39)
Feb 13 19:28:25.293806 kernel: loop: module loaded
Feb 13 19:28:25.293969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:28:25.293996 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:28:25.294015 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:28:25.294040 systemd-journald[1465]: Journal started
Feb 13 19:28:25.294076 systemd-journald[1465]: Runtime Journal (/run/log/journal/ec2a13491de6d61442fdabb136796251) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:28:24.684384 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:28:25.296566 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:28:24.695850 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:28:24.696643 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:28:25.299768 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:28:25.302351 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:28:25.304412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:28:25.307697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:28:25.309710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:28:25.312218 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:28:25.312940 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:28:25.315680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:28:25.315977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:28:25.318435 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:28:25.318776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:28:25.322224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:28:25.323448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:28:25.325879 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:28:25.326454 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:28:25.329296 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:28:25.329678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:28:25.342662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:28:25.355300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:28:25.361945 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:28:25.400324 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:28:25.414832 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:28:25.437113 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:28:25.441187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:28:25.441256 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:28:25.444853 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:28:25.461668 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:28:25.468587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:28:25.472112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:28:25.481740 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:28:25.485614 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:28:25.487476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:28:25.500611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:28:25.502054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:28:25.509249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:28:25.516945 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:28:25.533234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:28:25.550150 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:28:25.571448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:28:25.575332 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:28:25.578985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:28:25.581259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:28:25.596518 systemd-journald[1465]: Time spent on flushing to /var/log/journal/ec2a13491de6d61442fdabb136796251 is 87.264ms for 961 entries.
Feb 13 19:28:25.596518 systemd-journald[1465]: System Journal (/var/log/journal/ec2a13491de6d61442fdabb136796251) is 8M, max 195.6M, 187.6M free.
Feb 13 19:28:25.695799 systemd-journald[1465]: Received client request to flush runtime journal.
Feb 13 19:28:25.586818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:28:25.598202 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:28:25.607628 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:28:25.613001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:28:25.700834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:28:25.710484 kernel: loop0: detected capacity change from 0 to 62832
Feb 13 19:28:25.756447 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:28:25.758120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:28:25.761701 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:28:25.780485 udevadm[1516]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:28:25.797048 systemd-tmpfiles[1507]: ACLs are not supported, ignoring.
Feb 13 19:28:25.797079 systemd-tmpfiles[1507]: ACLs are not supported, ignoring.
Feb 13 19:28:25.808436 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:28:25.813888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:28:25.824649 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:28:25.848395 kernel: loop1: detected capacity change from 0 to 147912
Feb 13 19:28:25.970335 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:28:25.985611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:28:26.018387 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 19:28:26.018668 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Feb 13 19:28:26.019009 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Feb 13 19:28:26.034275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:28:26.135393 kernel: loop3: detected capacity change from 0 to 205544
Feb 13 19:28:26.296462 kernel: loop4: detected capacity change from 0 to 62832
Feb 13 19:28:26.329804 kernel: loop5: detected capacity change from 0 to 147912
Feb 13 19:28:26.362549 kernel: loop6: detected capacity change from 0 to 138176
Feb 13 19:28:26.405272 kernel: loop7: detected capacity change from 0 to 205544
Feb 13 19:28:26.456507 (sd-merge)[1535]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:28:26.457319 (sd-merge)[1535]: Merged extensions into '/usr'.
Feb 13 19:28:26.465111 systemd[1]: Reload requested from client PID 1506 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:28:26.465414 systemd[1]: Reloading...
Feb 13 19:28:26.625446 zram_generator::config[1559]: No configuration found.
Feb 13 19:28:27.085198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:28:27.271182 systemd[1]: Reloading finished in 805 ms.
Feb 13 19:28:27.298697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:28:27.314596 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:28:27.322904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:28:27.353643 systemd[1]: Reload requested from client PID 1611 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:28:27.353671 systemd[1]: Reloading...
Feb 13 19:28:27.401203 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:28:27.402144 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:28:27.404210 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:28:27.404943 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Feb 13 19:28:27.405200 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Feb 13 19:28:27.413068 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:28:27.414812 systemd-tmpfiles[1612]: Skipping /boot
Feb 13 19:28:27.435581 ldconfig[1501]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:28:27.449783 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:28:27.449804 systemd-tmpfiles[1612]: Skipping /boot
Feb 13 19:28:27.517399 zram_generator::config[1642]: No configuration found.
Feb 13 19:28:27.736657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:28:27.819962 systemd[1]: Reloading finished in 465 ms.
Feb 13 19:28:27.844034 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:28:27.845730 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:28:27.861880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:28:27.875819 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:28:27.881762 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:28:27.892599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:28:27.900245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:28:27.904515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:28:27.915727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:28:27.923213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:27.923528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:28:27.934469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:28:27.941892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:28:27.954810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:28:27.956354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:28:27.956555 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:28:27.956801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:27.975871 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:28:27.982580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:27.982893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:28:27.983235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:28:27.983477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:28:27.983625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:27.993205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:27.993597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:28:28.003745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:28:28.005139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:28:28.005321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:28:28.005587 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:28:28.007039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:28:28.018737 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:28:28.035043 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:28:28.037580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:28:28.037941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:28:28.045311 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:28:28.045604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:28:28.052550 systemd-udevd[1704]: Using default interface naming scheme 'v255'.
Feb 13 19:28:28.056713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:28:28.058318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:28:28.061764 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:28:28.062089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:28:28.071026 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:28:28.073074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:28:28.080699 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:28:28.085474 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:28:28.121977 augenrules[1732]: No rules
Feb 13 19:28:28.122912 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:28:28.125508 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:28:28.129611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:28:28.138701 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:28:28.168653 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:28:28.177015 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:28:28.194090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:28:28.215917 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:28:28.321823 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:28:28.415586 (udev-worker)[1751]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:28:28.475480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:28:28.482382 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:28:28.482621 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 19:28:28.482670 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 19:28:28.483072 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:28:28.497850 systemd-resolved[1700]: Positive Trust Anchors:
Feb 13 19:28:28.498778 systemd-resolved[1700]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:28:28.498945 systemd-resolved[1700]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:28:28.503099 systemd-networkd[1745]: lo: Link UP
Feb 13 19:28:28.505324 systemd-networkd[1745]: lo: Gained carrier
Feb 13 19:28:28.511803 systemd-networkd[1745]: Enumeration completed
Feb 13 19:28:28.512647 systemd-resolved[1700]: Defaulting to hostname 'linux'.
Feb 13 19:28:28.512779 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:28:28.517373 systemd-networkd[1745]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:28:28.517839 systemd-networkd[1745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:28:28.520241 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:28:28.528166 systemd-networkd[1745]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:28:28.532025 systemd-networkd[1745]: eth0: Link UP
Feb 13 19:28:28.532345 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:28:28.533688 systemd-networkd[1745]: eth0: Gained carrier
Feb 13 19:28:28.533721 systemd-networkd[1745]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:28:28.534493 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:28:28.541292 systemd[1]: Reached target network.target - Network.
Feb 13 19:28:28.542486 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:28:28.546405 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1755)
Feb 13 19:28:28.549601 systemd-networkd[1745]: eth0: DHCPv4 address 172.31.17.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:28:28.586108 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:28:28.630463 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 19:28:28.753384 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:28:28.778187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:28:28.800878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:28:28.822767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:28:28.826518 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:28:28.845716 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:28:28.892435 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:28:28.917076 lvm[1863]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:28:28.978990 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:28:28.980079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:28:28.988071 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:28:28.995279 lvm[1868]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:28:29.044266 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:28:29.283414 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:28:29.285344 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:28:29.287455 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:28:29.289152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:28:29.293114 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:28:29.295501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:28:29.298241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:28:29.301437 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:28:29.301487 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:28:29.302973 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:28:29.307997 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:28:29.310916 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:28:29.315616 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:28:29.317743 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:28:29.319372 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:28:29.339343 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:28:29.341205 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:28:29.346157 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:28:29.347778 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:28:29.356379 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:28:29.362381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:28:29.362458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:28:29.371731 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:28:29.382238 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:28:29.396876 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:28:29.402510 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:28:29.410617 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:28:29.412221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:28:29.432409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:28:29.500211 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:28:29.504171 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:28:29.511674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:28:29.520926 jq[1878]: false
Feb 13 19:28:29.521750 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:28:29.549836 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:28:29.554009 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:28:29.554883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:28:29.563629 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:28:29.569006 extend-filesystems[1879]: Found loop4
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found loop5
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found loop6
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found loop7
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p1
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p2
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p3
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found usr
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p4
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p6
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p7
Feb 13 19:28:29.571202 extend-filesystems[1879]: Found nvme0n1p9
Feb 13 19:28:29.571202 extend-filesystems[1879]: Checking size of /dev/nvme0n1p9
Feb 13 19:28:29.585520 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:28:29.642216 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:28:29.642624 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:28:29.643037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:28:29.643270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:28:29.700120 dbus-daemon[1877]: [system] SELinux support is enabled
Feb 13 19:28:29.703164 dbus-daemon[1877]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1745 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:28:29.705864 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:28:29.709274 jq[1890]: true
Feb 13 19:28:29.714472 systemd-networkd[1745]: eth0: Gained IPv6LL
Feb 13 19:28:29.727443 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:28:29.729852 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:28:29.732572 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:28:29.744835 extend-filesystems[1879]: Resized partition /dev/nvme0n1p9
Feb 13 19:28:29.742856 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:28:29.744891 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:28:29.756188 update_engine[1889]: I20250213 19:28:29.755913 1889 main.cc:92] Flatcar Update Engine starting
Feb 13 19:28:29.773416 update_engine[1889]: I20250213 19:28:29.760903 1889 update_check_scheduler.cc:74] Next update check in 8m49s
Feb 13 19:28:29.768544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: ----------------------------------------------------
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: corporation. Support and training for ntp-4 are
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: available at https://www.nwtime.org/support
Feb 13 19:28:29.773783 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: ----------------------------------------------------
Feb 13 19:28:29.768021 ntpd[1881]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting
Feb 13 19:28:29.771098 (ntainerd)[1901]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:28:29.796967 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: proto: precision = 0.073 usec (-24)
Feb 13 19:28:29.768052 ntpd[1881]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:28:29.797153 extend-filesystems[1918]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:28:29.869664 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:28:29.779602 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:28:29.768064 ntpd[1881]: ----------------------------------------------------
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: basedate set to 2025-02-01
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen normally on 3 eth0 172.31.17.153:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen normally on 4 lo [::1]:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listen normally on 5 eth0 [fe80::4fd:aff:fea7:bbab%2]:123
Feb 13 19:28:29.870171 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:28:29.792248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:28:29.768074 ntpd[1881]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:28:29.897063 jq[1910]: true
Feb 13 19:28:29.792298 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:28:29.768084 ntpd[1881]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:28:29.794202 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:28:29.768094 ntpd[1881]: corporation. Support and training for ntp-4 are
Feb 13 19:28:29.794232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:28:29.768104 ntpd[1881]: available at https://www.nwtime.org/support
Feb 13 19:28:29.798398 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:28:29.768114 ntpd[1881]: ----------------------------------------------------
Feb 13 19:28:29.795808 ntpd[1881]: proto: precision = 0.073 usec (-24)
Feb 13 19:28:29.808481 ntpd[1881]: basedate set to 2025-02-01
Feb 13 19:28:29.808506 ntpd[1881]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:28:29.854489 ntpd[1881]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:28:29.854574 ntpd[1881]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:28:29.854781 ntpd[1881]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:28:29.854817 ntpd[1881]: Listen normally on 3 eth0 172.31.17.153:123
Feb 13 19:28:29.854859 ntpd[1881]: Listen normally on 4 lo [::1]:123
Feb 13 19:28:29.854901 ntpd[1881]: Listen normally on 5 eth0 [fe80::4fd:aff:fea7:bbab%2]:123
Feb 13 19:28:29.854937 ntpd[1881]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:28:29.922419 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:28:29.926529 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:28:29.926529 ntpd[1881]: 13 Feb 19:28:29 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:28:29.922467 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:28:29.936666 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:28:29.970567 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:28:29.993226 coreos-metadata[1876]: Feb 13 19:28:29.993 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:28:30.015218 coreos-metadata[1876]: Feb 13 19:28:30.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:28:30.015218 coreos-metadata[1876]: Feb 13 19:28:30.011 INFO Fetch successful
Feb 13 19:28:30.015218 coreos-metadata[1876]: Feb 13 19:28:30.011 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:28:30.015218 coreos-metadata[1876]: Feb 13 19:28:30.014 INFO Fetch successful
Feb 13 19:28:30.015218 coreos-metadata[1876]: Feb 13 19:28:30.014 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:28:30.020641 coreos-metadata[1876]: Feb 13 19:28:30.020 INFO Fetch successful
Feb 13 19:28:30.020641 coreos-metadata[1876]: Feb 13 19:28:30.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:28:30.031646 coreos-metadata[1876]: Feb 13 19:28:30.023 INFO Fetch successful
Feb 13 19:28:30.031646 coreos-metadata[1876]: Feb 13 19:28:30.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:28:30.031646 coreos-metadata[1876]: Feb 13 19:28:30.031 INFO Fetch failed with 404: resource not found
Feb 13 19:28:30.033555 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:28:30.042630 coreos-metadata[1876]: Feb 13 19:28:30.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:28:30.043817 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:28:30.051897 coreos-metadata[1876]: Feb 13 19:28:30.046 INFO Fetch successful
Feb 13 19:28:30.051897 coreos-metadata[1876]: Feb 13 19:28:30.046 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:28:30.051897 coreos-metadata[1876]: Feb 13 19:28:30.048 INFO Fetch successful
Feb 13 19:28:30.051897 coreos-metadata[1876]: Feb 13 19:28:30.048 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:28:30.053434 coreos-metadata[1876]: Feb 13 19:28:30.052 INFO Fetch successful
Feb 13 19:28:30.053434 coreos-metadata[1876]: Feb 13 19:28:30.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:28:30.053635 coreos-metadata[1876]: Feb 13 19:28:30.053 INFO Fetch successful
Feb 13 19:28:30.053635 coreos-metadata[1876]: Feb 13 19:28:30.053 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:28:30.055043 coreos-metadata[1876]: Feb 13 19:28:30.054 INFO Fetch successful
Feb 13 19:28:30.057406 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:28:30.120148 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1756)
Feb 13 19:28:30.107471 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:28:30.120403 extend-filesystems[1918]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:28:30.120403 extend-filesystems[1918]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:28:30.120403 extend-filesystems[1918]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:28:30.109527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:28:30.154618 extend-filesystems[1879]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:28:30.177288 systemd-logind[1888]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:28:30.177313 systemd-logind[1888]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 19:28:30.177337 systemd-logind[1888]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:28:30.191397 systemd-logind[1888]: New seat seat0.
Feb 13 19:28:30.210801 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:28:30.214773 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:28:30.220437 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:28:30.289452 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:28:30.404026 bash[1981]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:28:30.410436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:28:30.459500 systemd[1]: Starting sshkeys.service...
Feb 13 19:28:30.494344 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:28:30.507897 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:28:30.525531 amazon-ssm-agent[1940]: Initializing new seelog logger
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: New Seelog Logger Creation Complete
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 processing appconfig overrides
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 processing appconfig overrides
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.554449 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 processing appconfig overrides
Feb 13 19:28:30.559413 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO Proxy environment variables:
Feb 13 19:28:30.613980 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.613980 amazon-ssm-agent[1940]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:28:30.613980 amazon-ssm-agent[1940]: 2025/02/13 19:28:30 processing appconfig overrides
Feb 13 19:28:30.642208 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:28:30.662468 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO https_proxy:
Feb 13 19:28:30.774688 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO http_proxy:
Feb 13 19:28:30.846234 coreos-metadata[2016]: Feb 13 19:28:30.845 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:28:30.847391 coreos-metadata[2016]: Feb 13 19:28:30.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:28:30.848135 coreos-metadata[2016]: Feb 13 19:28:30.848 INFO Fetch successful
Feb 13 19:28:30.848225 coreos-metadata[2016]: Feb 13 19:28:30.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:28:30.850074 coreos-metadata[2016]: Feb 13 19:28:30.849 INFO Fetch successful
Feb 13 19:28:30.851510 unknown[2016]: wrote ssh authorized keys file for user: core
Feb 13 19:28:30.864759 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO no_proxy:
Feb 13 19:28:30.906995 locksmithd[1928]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:28:30.965662 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:28:30.967287 update-ssh-keys[2037]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:28:30.970103 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:28:30.978533 systemd[1]: Finished sshkeys.service.
Feb 13 19:28:31.078830 amazon-ssm-agent[1940]: 2025-02-13 19:28:30 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:28:31.189510 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:28:31.190340 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:28:31.191185 dbus-daemon[1877]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1927 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:28:31.198897 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:28:31.212146 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO Agent will take identity from EC2
Feb 13 19:28:31.268555 polkitd[2084]: Started polkitd version 121
Feb 13 19:28:31.300354 polkitd[2084]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:28:31.302515 polkitd[2084]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:28:31.305642 polkitd[2084]: Finished loading, compiling and executing 2 rules
Feb 13 19:28:31.306471 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:28:31.306690 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:28:31.307587 polkitd[2084]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:28:31.318238 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:28:31.318384 sshd_keygen[1916]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:28:31.372817 systemd-hostnamed[1927]: Hostname set to (transient)
Feb 13 19:28:31.373551 systemd-resolved[1700]: System hostname changed to 'ip-172-31-17-153'.
Feb 13 19:28:31.412246 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:28:31.423432 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:28:31.423785 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:28:31.448897 systemd[1]: Started sshd@0-172.31.17.153:22-139.178.68.195:49002.service - OpenSSH per-connection server daemon (139.178.68.195:49002).
Feb 13 19:28:31.480107 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:28:31.481104 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:28:31.494034 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:28:31.518122 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:28:31.524194 containerd[1901]: time="2025-02-13T19:28:31.523213916Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:28:31.546465 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:28:31.559791 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:28:31.568827 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:28:31.570965 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:28:31.588208 containerd[1901]: time="2025-02-13T19:28:31.587948225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590437 containerd[1901]: time="2025-02-13T19:28:31.590186994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590437 containerd[1901]: time="2025-02-13T19:28:31.590233282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:28:31.590437 containerd[1901]: time="2025-02-13T19:28:31.590258882Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:28:31.590635 containerd[1901]: time="2025-02-13T19:28:31.590466721Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:28:31.590635 containerd[1901]: time="2025-02-13T19:28:31.590492212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590635 containerd[1901]: time="2025-02-13T19:28:31.590562088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590635 containerd[1901]: time="2025-02-13T19:28:31.590579699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590883 containerd[1901]: time="2025-02-13T19:28:31.590852992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590929 containerd[1901]: time="2025-02-13T19:28:31.590884214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590929 containerd[1901]: time="2025-02-13T19:28:31.590905057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:28:31.590929 containerd[1901]: time="2025-02-13T19:28:31.590921139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.591388 containerd[1901]: time="2025-02-13T19:28:31.591036274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.591388 containerd[1901]: time="2025-02-13T19:28:31.591276616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:28:31.591533 containerd[1901]: time="2025-02-13T19:28:31.591509100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:28:31.591574 containerd[1901]: time="2025-02-13T19:28:31.591536218Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:28:31.591860 containerd[1901]: time="2025-02-13T19:28:31.591641480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:28:31.591860 containerd[1901]: time="2025-02-13T19:28:31.591701977Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:28:31.599033 containerd[1901]: time="2025-02-13T19:28:31.598989686Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:28:31.599140 containerd[1901]: time="2025-02-13T19:28:31.599074130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:28:31.599140 containerd[1901]: time="2025-02-13T19:28:31.599100171Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:28:31.599140 containerd[1901]: time="2025-02-13T19:28:31.599122956Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:28:31.599252 containerd[1901]: time="2025-02-13T19:28:31.599142500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:28:31.599350 containerd[1901]: time="2025-02-13T19:28:31.599324529Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:28:31.599609 containerd[1901]: time="2025-02-13T19:28:31.599584480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:28:31.599735 containerd[1901]: time="2025-02-13T19:28:31.599712502Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:28:31.599813 containerd[1901]: time="2025-02-13T19:28:31.599736476Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:28:31.599813 containerd[1901]: time="2025-02-13T19:28:31.599759268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:28:31.599813 containerd[1901]: time="2025-02-13T19:28:31.599778103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599813 containerd[1901]: time="2025-02-13T19:28:31.599796434Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599822414Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599844431Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599866570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599885321Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599905132Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599922428Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:28:31.599962 containerd[1901]: time="2025-02-13T19:28:31.599956619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.599977608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.599996739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600020798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600038872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600058819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600076726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600096532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600115471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600138644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600156664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600193 containerd[1901]: time="2025-02-13T19:28:31.600174640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600195699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600218289Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600253491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600273515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600288695Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:28:31.600630 containerd[1901]: time="2025-02-13T19:28:31.600345062Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:28:31.601235 containerd[1901]: time="2025-02-13T19:28:31.601122055Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:28:31.601310 containerd[1901]: time="2025-02-13T19:28:31.601238947Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:28:31.601310 containerd[1901]: time="2025-02-13T19:28:31.601269679Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:28:31.601310 containerd[1901]: time="2025-02-13T19:28:31.601286109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.601448 containerd[1901]: time="2025-02-13T19:28:31.601307895Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:28:31.601448 containerd[1901]: time="2025-02-13T19:28:31.601323687Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:28:31.601448 containerd[1901]: time="2025-02-13T19:28:31.601340122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:28:31.601801 containerd[1901]: time="2025-02-13T19:28:31.601744620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:28:31.602025 containerd[1901]: time="2025-02-13T19:28:31.601803770Z" level=info msg="Connect containerd service"
Feb 13 19:28:31.602025 containerd[1901]: time="2025-02-13T19:28:31.601852389Z" level=info msg="using legacy CRI server"
Feb 13 19:28:31.602025 containerd[1901]: time="2025-02-13T19:28:31.601862762Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:28:31.602158 containerd[1901]: time="2025-02-13T19:28:31.602030707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:28:31.602837 containerd[1901]: time="2025-02-13T19:28:31.602804035Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:28:31.603100 containerd[1901]: time="2025-02-13T19:28:31.603066533Z" level=info msg="Start subscribing containerd event"
Feb 13 19:28:31.603153 containerd[1901]: time="2025-02-13T19:28:31.603132159Z" level=info msg="Start recovering state"
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.603611120Z" level=info msg="Start event monitor"
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.603650800Z" level=info msg="Start snapshots syncer"
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.603664691Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.603675570Z" level=info msg="Start streaming server"
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.604038625Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:28:31.604392 containerd[1901]: time="2025-02-13T19:28:31.604158772Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:28:31.608760 containerd[1901]: time="2025-02-13T19:28:31.608724210Z" level=info msg="containerd successfully booted in 0.092716s"
Feb 13 19:28:31.608943 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:28:31.617192 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [Registrar] Starting registrar module
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:28:31.624098 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:28:31.721306 amazon-ssm-agent[1940]: 2025-02-13 19:28:31 INFO [CredentialRefresher] Next credential rotation will be in 30.691660798933334 minutes
Feb 13 19:28:31.725028 sshd[2103]: Accepted publickey for core from 139.178.68.195 port 49002 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:31.729112 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:31.752920 systemd-logind[1888]: New session 1 of user core.
Feb 13 19:28:31.754459 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:28:31.764825 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:28:31.786397 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:28:31.822138 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:28:31.856154 (systemd)[2116]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:28:31.869206 systemd-logind[1888]: New session c1 of user core.
Feb 13 19:28:32.226579 systemd[2116]: Queued start job for default target default.target.
Feb 13 19:28:32.233639 systemd[2116]: Created slice app.slice - User Application Slice.
Feb 13 19:28:32.233685 systemd[2116]: Reached target paths.target - Paths.
Feb 13 19:28:32.233839 systemd[2116]: Reached target timers.target - Timers.
Feb 13 19:28:32.235517 systemd[2116]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:28:32.250608 systemd[2116]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:28:32.250761 systemd[2116]: Reached target sockets.target - Sockets.
Feb 13 19:28:32.250822 systemd[2116]: Reached target basic.target - Basic System.
Feb 13 19:28:32.250871 systemd[2116]: Reached target default.target - Main User Target.
Feb 13 19:28:32.250912 systemd[2116]: Startup finished in 357ms.
Feb 13 19:28:32.251177 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:28:32.268637 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:28:32.437197 systemd[1]: Started sshd@1-172.31.17.153:22-139.178.68.195:49010.service - OpenSSH per-connection server daemon (139.178.68.195:49010).
Feb 13 19:28:32.650610 sshd[2127]: Accepted publickey for core from 139.178.68.195 port 49010 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:32.658849 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:32.689321 amazon-ssm-agent[1940]: 2025-02-13 19:28:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:28:32.700023 systemd-logind[1888]: New session 2 of user core.
Feb 13 19:28:32.711517 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:28:32.789856 amazon-ssm-agent[1940]: 2025-02-13 19:28:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2130) started
Feb 13 19:28:32.871165 sshd[2131]: Connection closed by 139.178.68.195 port 49010
Feb 13 19:28:32.871697 sshd-session[2127]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:32.881055 systemd[1]: sshd@1-172.31.17.153:22-139.178.68.195:49010.service: Deactivated successfully.
Feb 13 19:28:32.886708 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:28:32.892724 systemd-logind[1888]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:28:32.896927 amazon-ssm-agent[1940]: 2025-02-13 19:28:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:28:32.914020 systemd[1]: Started sshd@2-172.31.17.153:22-139.178.68.195:49024.service - OpenSSH per-connection server daemon (139.178.68.195:49024).
Feb 13 19:28:32.918875 systemd-logind[1888]: Removed session 2.
Feb 13 19:28:33.098239 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 49024 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:33.102164 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:33.112799 systemd-logind[1888]: New session 3 of user core.
Feb 13 19:28:33.119777 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:28:33.251167 sshd[2149]: Connection closed by 139.178.68.195 port 49024
Feb 13 19:28:33.253409 sshd-session[2142]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:33.257999 systemd[1]: sshd@2-172.31.17.153:22-139.178.68.195:49024.service: Deactivated successfully.
Feb 13 19:28:33.261570 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:28:33.263729 systemd-logind[1888]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:28:33.266300 systemd-logind[1888]: Removed session 3.
Feb 13 19:28:33.684506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:28:33.686780 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:28:33.688823 systemd[1]: Startup finished in 863ms (kernel) + 8.784s (initrd) + 10.137s (userspace) = 19.785s.
Feb 13 19:28:33.951170 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:28:35.563339 kubelet[2159]: E0213 19:28:35.563283 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:28:35.566548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:28:35.566747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:28:35.567382 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 239.9M memory peak.
Feb 13 19:28:43.312829 systemd[1]: Started sshd@3-172.31.17.153:22-139.178.68.195:57552.service - OpenSSH per-connection server daemon (139.178.68.195:57552).
Feb 13 19:28:43.500561 sshd[2171]: Accepted publickey for core from 139.178.68.195 port 57552 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:43.502577 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:43.513876 systemd-logind[1888]: New session 4 of user core.
Feb 13 19:28:43.520670 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:28:43.648073 sshd[2173]: Connection closed by 139.178.68.195 port 57552
Feb 13 19:28:43.648620 sshd-session[2171]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:43.656738 systemd[1]: sshd@3-172.31.17.153:22-139.178.68.195:57552.service: Deactivated successfully.
Feb 13 19:28:43.662704 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:28:43.668838 systemd-logind[1888]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:28:43.695836 systemd[1]: Started sshd@4-172.31.17.153:22-139.178.68.195:57558.service - OpenSSH per-connection server daemon (139.178.68.195:57558).
Feb 13 19:28:43.699022 systemd-logind[1888]: Removed session 4.
Feb 13 19:28:43.871501 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 57558 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:43.873228 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:43.887222 systemd-logind[1888]: New session 5 of user core.
Feb 13 19:28:43.898186 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:28:44.018096 sshd[2181]: Connection closed by 139.178.68.195 port 57558
Feb 13 19:28:44.018925 sshd-session[2178]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:44.023759 systemd[1]: sshd@4-172.31.17.153:22-139.178.68.195:57558.service: Deactivated successfully.
Feb 13 19:28:44.028079 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:28:44.029103 systemd-logind[1888]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:28:44.030582 systemd-logind[1888]: Removed session 5.
Feb 13 19:28:44.061998 systemd[1]: Started sshd@5-172.31.17.153:22-139.178.68.195:57570.service - OpenSSH per-connection server daemon (139.178.68.195:57570).
Feb 13 19:28:44.236404 sshd[2187]: Accepted publickey for core from 139.178.68.195 port 57570 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:44.239610 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:44.246274 systemd-logind[1888]: New session 6 of user core.
Feb 13 19:28:44.252672 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:28:44.387656 sshd[2189]: Connection closed by 139.178.68.195 port 57570
Feb 13 19:28:44.390402 sshd-session[2187]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:44.405429 systemd-logind[1888]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:28:44.406616 systemd[1]: sshd@5-172.31.17.153:22-139.178.68.195:57570.service: Deactivated successfully.
Feb 13 19:28:44.408881 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:28:44.417210 systemd-logind[1888]: Removed session 6.
Feb 13 19:28:44.422762 systemd[1]: Started sshd@6-172.31.17.153:22-139.178.68.195:57574.service - OpenSSH per-connection server daemon (139.178.68.195:57574).
Feb 13 19:28:44.589819 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 57574 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:44.591301 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:44.604032 systemd-logind[1888]: New session 7 of user core.
Feb 13 19:28:44.618644 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:28:44.749837 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:28:44.750400 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:28:44.782882 sudo[2198]: pam_unix(sudo:session): session closed for user root
Feb 13 19:28:44.808354 sshd[2197]: Connection closed by 139.178.68.195 port 57574
Feb 13 19:28:44.809526 sshd-session[2194]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:44.815699 systemd[1]: sshd@6-172.31.17.153:22-139.178.68.195:57574.service: Deactivated successfully.
Feb 13 19:28:44.818555 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:28:44.821756 systemd-logind[1888]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:28:44.823112 systemd-logind[1888]: Removed session 7.
Feb 13 19:28:44.855856 systemd[1]: Started sshd@7-172.31.17.153:22-139.178.68.195:57578.service - OpenSSH per-connection server daemon (139.178.68.195:57578).
Feb 13 19:28:45.052027 sshd[2204]: Accepted publickey for core from 139.178.68.195 port 57578 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:45.053822 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:45.065339 systemd-logind[1888]: New session 8 of user core.
Feb 13 19:28:45.068782 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:28:45.177570 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:28:45.177981 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:28:45.188307 sudo[2208]: pam_unix(sudo:session): session closed for user root
Feb 13 19:28:45.196868 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:28:45.197258 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:28:45.217208 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:28:45.292725 augenrules[2230]: No rules
Feb 13 19:28:45.294606 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:28:45.294879 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:28:45.296177 sudo[2207]: pam_unix(sudo:session): session closed for user root
Feb 13 19:28:45.321898 sshd[2206]: Connection closed by 139.178.68.195 port 57578
Feb 13 19:28:45.322584 sshd-session[2204]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:45.331451 systemd[1]: sshd@7-172.31.17.153:22-139.178.68.195:57578.service: Deactivated successfully.
Feb 13 19:28:45.337183 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:28:45.345133 systemd-logind[1888]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:28:45.382339 systemd[1]: Started sshd@8-172.31.17.153:22-139.178.68.195:57586.service - OpenSSH per-connection server daemon (139.178.68.195:57586).
Feb 13 19:28:45.389731 systemd-logind[1888]: Removed session 8.
Feb 13 19:28:45.588547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:28:45.597425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:28:45.625913 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 57586 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:45.633225 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:45.665592 systemd-logind[1888]: New session 9 of user core.
Feb 13 19:28:45.666689 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:28:45.808288 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:28:45.808750 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:28:46.007666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:28:46.023019 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:28:46.246396 kubelet[2260]: E0213 19:28:46.244227 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:28:46.248972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:28:46.249238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:28:46.249755 systemd[1]: kubelet.service: Consumed 186ms CPU time, 97.6M memory peak.
Feb 13 19:28:47.372240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:28:47.373056 systemd[1]: kubelet.service: Consumed 186ms CPU time, 97.6M memory peak.
Feb 13 19:28:47.380920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:28:47.447795 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-9.scope)...
Feb 13 19:28:47.447816 systemd[1]: Reloading...
Feb 13 19:28:47.595392 zram_generator::config[2337]: No configuration found.
Feb 13 19:28:48.013647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:28:48.141857 systemd[1]: Reloading finished in 693 ms.
Feb 13 19:28:48.202482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:28:48.215827 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:28:48.220756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:48.221621 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:28:48.221832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:48.221875 systemd[1]: kubelet.service: Consumed 123ms CPU time, 84.6M memory peak. Feb 13 19:28:48.229224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:48.483248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:48.494858 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:28:48.572381 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:28:48.572381 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:28:48.572381 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:28:48.574649 kubelet[2397]: I0213 19:28:48.574586 2397 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:28:49.403451 kubelet[2397]: I0213 19:28:49.403409 2397 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:28:49.403451 kubelet[2397]: I0213 19:28:49.403440 2397 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:28:49.403775 kubelet[2397]: I0213 19:28:49.403753 2397 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:28:49.459037 kubelet[2397]: I0213 19:28:49.458443 2397 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:28:49.480323 kubelet[2397]: E0213 19:28:49.480285 2397 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:28:49.480323 kubelet[2397]: I0213 19:28:49.480318 2397 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:28:49.493497 kubelet[2397]: I0213 19:28:49.492881 2397 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:28:49.493497 kubelet[2397]: I0213 19:28:49.493278 2397 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:28:49.493959 kubelet[2397]: I0213 19:28:49.493920 2397 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:28:49.494841 kubelet[2397]: I0213 19:28:49.493955 2397 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:28:49.494841 kubelet[2397]: I0213 19:28:49.494445 2397 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:28:49.494841 kubelet[2397]: I0213 19:28:49.494464 2397 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:28:49.494841 kubelet[2397]: I0213 19:28:49.494652 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:49.502888 kubelet[2397]: I0213 19:28:49.502847 2397 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:28:49.503625 kubelet[2397]: I0213 19:28:49.503598 2397 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:28:49.503746 kubelet[2397]: I0213 19:28:49.503650 2397 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:28:49.503746 kubelet[2397]: I0213 19:28:49.503668 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:28:49.506990 kubelet[2397]: E0213 19:28:49.506579 2397 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:49.506990 kubelet[2397]: E0213 19:28:49.506655 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:49.512615 kubelet[2397]: I0213 19:28:49.512485 2397 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:28:49.515270 kubelet[2397]: I0213 19:28:49.515228 2397 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:28:49.516074 kubelet[2397]: W0213 19:28:49.516049 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:28:49.516745 kubelet[2397]: I0213 19:28:49.516723 2397 server.go:1269] "Started kubelet" Feb 13 19:28:49.521386 kubelet[2397]: W0213 19:28:49.520477 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:28:49.521386 kubelet[2397]: E0213 19:28:49.520531 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.17.153\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:28:49.521386 kubelet[2397]: W0213 19:28:49.520834 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:28:49.521386 kubelet[2397]: E0213 19:28:49.520860 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:28:49.521386 kubelet[2397]: I0213 19:28:49.521235 2397 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:28:49.523382 kubelet[2397]: I0213 19:28:49.523328 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:28:49.526650 kubelet[2397]: I0213 19:28:49.526580 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:28:49.527238 kubelet[2397]: I0213 19:28:49.527207 2397 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:28:49.531604 kubelet[2397]: 
I0213 19:28:49.531133 2397 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:28:49.534490 kubelet[2397]: I0213 19:28:49.532841 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:28:49.535767 kubelet[2397]: I0213 19:28:49.535744 2397 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:28:49.536201 kubelet[2397]: E0213 19:28:49.536181 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:49.536537 kubelet[2397]: I0213 19:28:49.536523 2397 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:28:49.537112 kubelet[2397]: I0213 19:28:49.537097 2397 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:28:49.550387 kubelet[2397]: I0213 19:28:49.544341 2397 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:28:49.550387 kubelet[2397]: I0213 19:28:49.544512 2397 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:28:49.560443 kubelet[2397]: E0213 19:28:49.542468 2397 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.153.1823db3b758a65ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.153,UID:172.31.17.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.153,},FirstTimestamp:2025-02-13 19:28:49.516701134 +0000 UTC m=+1.014093092,LastTimestamp:2025-02-13 19:28:49.516701134 +0000 UTC 
m=+1.014093092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.153,}" Feb 13 19:28:49.571016 kubelet[2397]: W0213 19:28:49.559437 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:28:49.571164 kubelet[2397]: E0213 19:28:49.571040 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:28:49.571164 kubelet[2397]: I0213 19:28:49.562695 2397 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:28:49.577396 kubelet[2397]: E0213 19:28:49.574093 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.17.153\" not found" node="172.31.17.153" Feb 13 19:28:49.593006 kubelet[2397]: E0213 19:28:49.562874 2397 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:28:49.605232 kubelet[2397]: I0213 19:28:49.602494 2397 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:28:49.605232 kubelet[2397]: I0213 19:28:49.602513 2397 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:28:49.605232 kubelet[2397]: I0213 19:28:49.602537 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:49.605621 kubelet[2397]: I0213 19:28:49.605427 2397 policy_none.go:49] "None policy: Start" Feb 13 19:28:49.609047 kubelet[2397]: I0213 19:28:49.607582 2397 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:28:49.609047 kubelet[2397]: I0213 19:28:49.607740 2397 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:28:49.638178 kubelet[2397]: E0213 19:28:49.638149 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:49.641355 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:28:49.655689 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:28:49.677621 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:28:49.682346 kubelet[2397]: I0213 19:28:49.682319 2397 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:28:49.682827 kubelet[2397]: I0213 19:28:49.682809 2397 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:28:49.683142 kubelet[2397]: I0213 19:28:49.683004 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:28:49.688469 kubelet[2397]: I0213 19:28:49.688448 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:28:49.691780 kubelet[2397]: E0213 19:28:49.691740 2397 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.153\" not found" Feb 13 19:28:49.709149 kubelet[2397]: I0213 19:28:49.709098 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:28:49.711902 kubelet[2397]: I0213 19:28:49.711675 2397 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:28:49.711902 kubelet[2397]: I0213 19:28:49.711766 2397 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:28:49.711902 kubelet[2397]: I0213 19:28:49.711790 2397 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:28:49.711902 kubelet[2397]: E0213 19:28:49.711839 2397 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:28:49.784778 kubelet[2397]: I0213 19:28:49.784738 2397 kubelet_node_status.go:72] "Attempting to register node" node="172.31.17.153" Feb 13 19:28:49.791332 kubelet[2397]: I0213 19:28:49.791301 2397 kubelet_node_status.go:75] "Successfully registered node" node="172.31.17.153" Feb 13 19:28:49.791332 kubelet[2397]: E0213 19:28:49.791335 2397 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": node \"172.31.17.153\" not found" Feb 13 19:28:49.811247 kubelet[2397]: E0213 19:28:49.811215 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:49.912303 kubelet[2397]: E0213 19:28:49.912250 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.013542 kubelet[2397]: E0213 19:28:50.013410 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.038343 sudo[2245]: pam_unix(sudo:session): session closed for user root Feb 13 19:28:50.062504 sshd[2244]: Connection closed by 139.178.68.195 port 57586 Feb 13 19:28:50.063470 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:50.069213 systemd-logind[1888]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:28:50.072000 systemd[1]: sshd@8-172.31.17.153:22-139.178.68.195:57586.service: Deactivated successfully. 
Feb 13 19:28:50.074876 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:28:50.075289 systemd[1]: session-9.scope: Consumed 518ms CPU time, 73.1M memory peak. Feb 13 19:28:50.078164 systemd-logind[1888]: Removed session 9. Feb 13 19:28:50.113666 kubelet[2397]: E0213 19:28:50.113612 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.214443 kubelet[2397]: E0213 19:28:50.214322 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.315022 kubelet[2397]: E0213 19:28:50.314888 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.415949 kubelet[2397]: E0213 19:28:50.415834 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.418198 kubelet[2397]: I0213 19:28:50.418048 2397 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:28:50.418382 kubelet[2397]: W0213 19:28:50.418348 2397 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:28:50.507139 kubelet[2397]: E0213 19:28:50.507041 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:50.516653 kubelet[2397]: E0213 19:28:50.516609 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Feb 13 19:28:50.618292 kubelet[2397]: I0213 19:28:50.618169 2397 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:28:50.619823 containerd[1901]: 
time="2025-02-13T19:28:50.619718440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:28:50.624099 kubelet[2397]: I0213 19:28:50.620094 2397 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:28:51.507571 kubelet[2397]: E0213 19:28:51.507512 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:51.507571 kubelet[2397]: I0213 19:28:51.507514 2397 apiserver.go:52] "Watching apiserver" Feb 13 19:28:51.516569 kubelet[2397]: E0213 19:28:51.515824 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:28:51.525144 systemd[1]: Created slice kubepods-besteffort-podee7ca8ce_9123_4ddc_b57a_01903262aa2d.slice - libcontainer container kubepods-besteffort-podee7ca8ce_9123_4ddc_b57a_01903262aa2d.slice. Feb 13 19:28:51.539201 kubelet[2397]: I0213 19:28:51.539156 2397 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:28:51.545005 systemd[1]: Created slice kubepods-besteffort-pod284acb36_b461_448e_aa70_59ce0c307569.slice - libcontainer container kubepods-besteffort-pod284acb36_b461_448e_aa70_59ce0c307569.slice. 
Feb 13 19:28:51.559312 kubelet[2397]: I0213 19:28:51.559273 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qrmn\" (UniqueName: \"kubernetes.io/projected/ee7ca8ce-9123-4ddc-b57a-01903262aa2d-kube-api-access-6qrmn\") pod \"kube-proxy-pqv86\" (UID: \"ee7ca8ce-9123-4ddc-b57a-01903262aa2d\") " pod="kube-system/kube-proxy-pqv86" Feb 13 19:28:51.559481 kubelet[2397]: I0213 19:28:51.559342 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-policysync\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559481 kubelet[2397]: I0213 19:28:51.559396 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-flexvol-driver-host\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559481 kubelet[2397]: I0213 19:28:51.559424 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/75d90e50-6992-4166-9b4f-bb7871aaa223-registration-dir\") pod \"csi-node-driver-2slcf\" (UID: \"75d90e50-6992-4166-9b4f-bb7871aaa223\") " pod="calico-system/csi-node-driver-2slcf" Feb 13 19:28:51.559481 kubelet[2397]: I0213 19:28:51.559454 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchh2\" (UniqueName: \"kubernetes.io/projected/75d90e50-6992-4166-9b4f-bb7871aaa223-kube-api-access-wchh2\") pod \"csi-node-driver-2slcf\" (UID: \"75d90e50-6992-4166-9b4f-bb7871aaa223\") " pod="calico-system/csi-node-driver-2slcf" Feb 13 
19:28:51.559481 kubelet[2397]: I0213 19:28:51.559480 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/284acb36-b461-448e-aa70-59ce0c307569-node-certs\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559702 kubelet[2397]: I0213 19:28:51.559508 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-var-run-calico\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559702 kubelet[2397]: I0213 19:28:51.559539 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-cni-net-dir\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559702 kubelet[2397]: I0213 19:28:51.559568 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284acb36-b461-448e-aa70-59ce0c307569-tigera-ca-bundle\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.559702 kubelet[2397]: I0213 19:28:51.559597 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/75d90e50-6992-4166-9b4f-bb7871aaa223-varrun\") pod \"csi-node-driver-2slcf\" (UID: \"75d90e50-6992-4166-9b4f-bb7871aaa223\") " pod="calico-system/csi-node-driver-2slcf" Feb 13 19:28:51.559702 kubelet[2397]: I0213 19:28:51.559627 2397 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75d90e50-6992-4166-9b4f-bb7871aaa223-kubelet-dir\") pod \"csi-node-driver-2slcf\" (UID: \"75d90e50-6992-4166-9b4f-bb7871aaa223\") " pod="calico-system/csi-node-driver-2slcf" Feb 13 19:28:51.559990 kubelet[2397]: I0213 19:28:51.559659 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/75d90e50-6992-4166-9b4f-bb7871aaa223-socket-dir\") pod \"csi-node-driver-2slcf\" (UID: \"75d90e50-6992-4166-9b4f-bb7871aaa223\") " pod="calico-system/csi-node-driver-2slcf" Feb 13 19:28:51.559990 kubelet[2397]: I0213 19:28:51.559689 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee7ca8ce-9123-4ddc-b57a-01903262aa2d-kube-proxy\") pod \"kube-proxy-pqv86\" (UID: \"ee7ca8ce-9123-4ddc-b57a-01903262aa2d\") " pod="kube-system/kube-proxy-pqv86" Feb 13 19:28:51.559990 kubelet[2397]: I0213 19:28:51.559729 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee7ca8ce-9123-4ddc-b57a-01903262aa2d-xtables-lock\") pod \"kube-proxy-pqv86\" (UID: \"ee7ca8ce-9123-4ddc-b57a-01903262aa2d\") " pod="kube-system/kube-proxy-pqv86" Feb 13 19:28:51.559990 kubelet[2397]: I0213 19:28:51.559759 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee7ca8ce-9123-4ddc-b57a-01903262aa2d-lib-modules\") pod \"kube-proxy-pqv86\" (UID: \"ee7ca8ce-9123-4ddc-b57a-01903262aa2d\") " pod="kube-system/kube-proxy-pqv86" Feb 13 19:28:51.559990 kubelet[2397]: I0213 19:28:51.559852 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-lib-modules\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.560176 kubelet[2397]: I0213 19:28:51.559885 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-xtables-lock\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.560176 kubelet[2397]: I0213 19:28:51.559926 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-var-lib-calico\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.560176 kubelet[2397]: I0213 19:28:51.559954 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-cni-bin-dir\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.560176 kubelet[2397]: I0213 19:28:51.559986 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/284acb36-b461-448e-aa70-59ce0c307569-cni-log-dir\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.560176 kubelet[2397]: I0213 19:28:51.560017 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9245j\" (UniqueName: 
\"kubernetes.io/projected/284acb36-b461-448e-aa70-59ce0c307569-kube-api-access-9245j\") pod \"calico-node-hjkkp\" (UID: \"284acb36-b461-448e-aa70-59ce0c307569\") " pod="calico-system/calico-node-hjkkp" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662339 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.663010 kubelet[2397]: W0213 19:28:51.662385 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662410 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662628 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.663010 kubelet[2397]: W0213 19:28:51.662640 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662656 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662845 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.663010 kubelet[2397]: W0213 19:28:51.662856 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.663010 kubelet[2397]: E0213 19:28:51.662868 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:51.664770 kubelet[2397]: E0213 19:28:51.664167 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.664770 kubelet[2397]: W0213 19:28:51.664186 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.664770 kubelet[2397]: E0213 19:28:51.664205 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:51.740260 kubelet[2397]: E0213 19:28:51.740242 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.740260 kubelet[2397]: W0213 19:28:51.740251 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.740623 kubelet[2397]: E0213 19:28:51.740277 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:51.740623 kubelet[2397]: E0213 19:28:51.740510 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.740623 kubelet[2397]: W0213 19:28:51.740521 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.740623 kubelet[2397]: E0213 19:28:51.740543 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:51.741314 kubelet[2397]: E0213 19:28:51.740820 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.741314 kubelet[2397]: W0213 19:28:51.741007 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.741314 kubelet[2397]: E0213 19:28:51.741201 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:51.741314 kubelet[2397]: E0213 19:28:51.741283 2397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:51.741314 kubelet[2397]: W0213 19:28:51.741293 2397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:51.741314 kubelet[2397]: E0213 19:28:51.741305 2397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:51.844493 containerd[1901]: time="2025-02-13T19:28:51.844349251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqv86,Uid:ee7ca8ce-9123-4ddc-b57a-01903262aa2d,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:51.862390 containerd[1901]: time="2025-02-13T19:28:51.862342100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjkkp,Uid:284acb36-b461-448e-aa70-59ce0c307569,Namespace:calico-system,Attempt:0,}" Feb 13 19:28:52.401748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973796360.mount: Deactivated successfully. 
Feb 13 19:28:52.409559 containerd[1901]: time="2025-02-13T19:28:52.409509870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:52.411448 containerd[1901]: time="2025-02-13T19:28:52.411407254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:52.412307 containerd[1901]: time="2025-02-13T19:28:52.412254661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:28:52.415021 containerd[1901]: time="2025-02-13T19:28:52.414970547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:28:52.416259 containerd[1901]: time="2025-02-13T19:28:52.416200601Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:52.420410 containerd[1901]: time="2025-02-13T19:28:52.419855810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:52.422423 containerd[1901]: time="2025-02-13T19:28:52.420920091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.337794ms" Feb 13 19:28:52.431889 containerd[1901]: 
time="2025-02-13T19:28:52.431841979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.345682ms" Feb 13 19:28:52.508336 kubelet[2397]: E0213 19:28:52.508268 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:52.579814 containerd[1901]: time="2025-02-13T19:28:52.578689369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:52.580070 containerd[1901]: time="2025-02-13T19:28:52.580016347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:52.580210 containerd[1901]: time="2025-02-13T19:28:52.578522009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:52.580210 containerd[1901]: time="2025-02-13T19:28:52.580168147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:52.580349 containerd[1901]: time="2025-02-13T19:28:52.580198865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:52.580549 containerd[1901]: time="2025-02-13T19:28:52.580515221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:52.580758 containerd[1901]: time="2025-02-13T19:28:52.580729738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:52.581196 containerd[1901]: time="2025-02-13T19:28:52.581090339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:52.696741 systemd[1]: Started cri-containerd-661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d.scope - libcontainer container 661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d. Feb 13 19:28:52.704148 systemd[1]: Started cri-containerd-7708fc7827839ed7c3fdb6ec3abb8614e1f58dfa29ec58f582d1dcc5fb779d0b.scope - libcontainer container 7708fc7827839ed7c3fdb6ec3abb8614e1f58dfa29ec58f582d1dcc5fb779d0b. Feb 13 19:28:52.751164 containerd[1901]: time="2025-02-13T19:28:52.751110840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjkkp,Uid:284acb36-b461-448e-aa70-59ce0c307569,Namespace:calico-system,Attempt:0,} returns sandbox id \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\"" Feb 13 19:28:52.755259 containerd[1901]: time="2025-02-13T19:28:52.755106577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:28:52.761139 containerd[1901]: time="2025-02-13T19:28:52.761102249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqv86,Uid:ee7ca8ce-9123-4ddc-b57a-01903262aa2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7708fc7827839ed7c3fdb6ec3abb8614e1f58dfa29ec58f582d1dcc5fb779d0b\"" Feb 13 19:28:53.508965 kubelet[2397]: E0213 19:28:53.508936 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:53.713296 kubelet[2397]: E0213 19:28:53.712817 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:28:54.145321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187729241.mount: Deactivated successfully. Feb 13 19:28:54.303206 containerd[1901]: time="2025-02-13T19:28:54.303150560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:54.305471 containerd[1901]: time="2025-02-13T19:28:54.305075025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:28:54.306427 containerd[1901]: time="2025-02-13T19:28:54.306386907Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:54.309820 containerd[1901]: time="2025-02-13T19:28:54.309752243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:54.310788 containerd[1901]: time="2025-02-13T19:28:54.310464337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.555314079s" Feb 13 19:28:54.310788 containerd[1901]: time="2025-02-13T19:28:54.310507086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:28:54.312004 containerd[1901]: 
time="2025-02-13T19:28:54.311975951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:28:54.314729 containerd[1901]: time="2025-02-13T19:28:54.314688493Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:28:54.339130 containerd[1901]: time="2025-02-13T19:28:54.339077839Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99\"" Feb 13 19:28:54.341376 containerd[1901]: time="2025-02-13T19:28:54.341039526Z" level=info msg="StartContainer for \"efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99\"" Feb 13 19:28:54.392877 systemd[1]: Started cri-containerd-efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99.scope - libcontainer container efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99. Feb 13 19:28:54.456656 containerd[1901]: time="2025-02-13T19:28:54.456101246Z" level=info msg="StartContainer for \"efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99\" returns successfully" Feb 13 19:28:54.497949 systemd[1]: cri-containerd-efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99.scope: Deactivated successfully. 
Feb 13 19:28:54.510661 kubelet[2397]: E0213 19:28:54.510618 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:54.617402 containerd[1901]: time="2025-02-13T19:28:54.617304378Z" level=info msg="shim disconnected" id=efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99 namespace=k8s.io Feb 13 19:28:54.617402 containerd[1901]: time="2025-02-13T19:28:54.617386822Z" level=warning msg="cleaning up after shim disconnected" id=efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99 namespace=k8s.io Feb 13 19:28:54.617402 containerd[1901]: time="2025-02-13T19:28:54.617400249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:55.084837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efecec7b7ab4ac4c6536dbc5c33ec2f06afc6323d6538ef8f8a2bf93afc80c99-rootfs.mount: Deactivated successfully. Feb 13 19:28:55.510990 kubelet[2397]: E0213 19:28:55.510932 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:55.714899 kubelet[2397]: E0213 19:28:55.714837 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:28:55.844160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930743355.mount: Deactivated successfully. 
Feb 13 19:28:56.511173 kubelet[2397]: E0213 19:28:56.511096 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:56.782310 containerd[1901]: time="2025-02-13T19:28:56.781913162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:56.785816 containerd[1901]: time="2025-02-13T19:28:56.784621536Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 19:28:56.788536 containerd[1901]: time="2025-02-13T19:28:56.788453444Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:56.802073 containerd[1901]: time="2025-02-13T19:28:56.801951632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:56.804241 containerd[1901]: time="2025-02-13T19:28:56.802860097Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.490842376s" Feb 13 19:28:56.804241 containerd[1901]: time="2025-02-13T19:28:56.802905630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 19:28:56.808797 containerd[1901]: time="2025-02-13T19:28:56.808750253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:28:56.815483 containerd[1901]: 
time="2025-02-13T19:28:56.815011822Z" level=info msg="CreateContainer within sandbox \"7708fc7827839ed7c3fdb6ec3abb8614e1f58dfa29ec58f582d1dcc5fb779d0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:28:56.860839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286600147.mount: Deactivated successfully. Feb 13 19:28:56.863886 containerd[1901]: time="2025-02-13T19:28:56.863308374Z" level=info msg="CreateContainer within sandbox \"7708fc7827839ed7c3fdb6ec3abb8614e1f58dfa29ec58f582d1dcc5fb779d0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b04acf7a173d1b0acaad1b9dff95773fd0503158d5f4f5cecfae220e3995344f\"" Feb 13 19:28:56.864456 containerd[1901]: time="2025-02-13T19:28:56.864334233Z" level=info msg="StartContainer for \"b04acf7a173d1b0acaad1b9dff95773fd0503158d5f4f5cecfae220e3995344f\"" Feb 13 19:28:56.967072 systemd[1]: Started cri-containerd-b04acf7a173d1b0acaad1b9dff95773fd0503158d5f4f5cecfae220e3995344f.scope - libcontainer container b04acf7a173d1b0acaad1b9dff95773fd0503158d5f4f5cecfae220e3995344f. 
Feb 13 19:28:57.078830 containerd[1901]: time="2025-02-13T19:28:57.070086487Z" level=info msg="StartContainer for \"b04acf7a173d1b0acaad1b9dff95773fd0503158d5f4f5cecfae220e3995344f\" returns successfully" Feb 13 19:28:57.511740 kubelet[2397]: E0213 19:28:57.511669 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:57.714074 kubelet[2397]: E0213 19:28:57.714032 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:28:57.788194 kubelet[2397]: I0213 19:28:57.788013 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pqv86" podStartSLOduration=4.743094943 podStartE2EDuration="8.787996659s" podCreationTimestamp="2025-02-13 19:28:49 +0000 UTC" firstStartedPulling="2025-02-13 19:28:52.763099519 +0000 UTC m=+4.260491465" lastFinishedPulling="2025-02-13 19:28:56.808001241 +0000 UTC m=+8.305393181" observedRunningTime="2025-02-13 19:28:57.787958352 +0000 UTC m=+9.285350312" watchObservedRunningTime="2025-02-13 19:28:57.787996659 +0000 UTC m=+9.285388618" Feb 13 19:28:58.511912 kubelet[2397]: E0213 19:28:58.511850 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:59.512502 kubelet[2397]: E0213 19:28:59.512454 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:28:59.713187 kubelet[2397]: E0213 19:28:59.712988 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:00.517796 kubelet[2397]: E0213 19:29:00.517712 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:01.407736 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:29:01.519050 kubelet[2397]: E0213 19:29:01.518881 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:01.713519 kubelet[2397]: E0213 19:29:01.712683 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:02.519758 kubelet[2397]: E0213 19:29:02.519463 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:02.870601 containerd[1901]: time="2025-02-13T19:29:02.870464035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:02.872764 containerd[1901]: time="2025-02-13T19:29:02.872562651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:29:02.874162 containerd[1901]: time="2025-02-13T19:29:02.873867849Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:02.876193 containerd[1901]: time="2025-02-13T19:29:02.876152980Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:02.877066 containerd[1901]: time="2025-02-13T19:29:02.877028683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.067878847s" Feb 13 19:29:02.877159 containerd[1901]: time="2025-02-13T19:29:02.877071814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:29:02.898663 containerd[1901]: time="2025-02-13T19:29:02.896949068Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:29:02.966485 containerd[1901]: time="2025-02-13T19:29:02.966431144Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32\"" Feb 13 19:29:02.967153 containerd[1901]: time="2025-02-13T19:29:02.967112192Z" level=info msg="StartContainer for \"17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32\"" Feb 13 19:29:03.038072 systemd[1]: Started cri-containerd-17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32.scope - libcontainer container 17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32. 
Feb 13 19:29:03.080003 containerd[1901]: time="2025-02-13T19:29:03.079133383Z" level=info msg="StartContainer for \"17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32\" returns successfully" Feb 13 19:29:03.519960 kubelet[2397]: E0213 19:29:03.519887 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:03.715505 kubelet[2397]: E0213 19:29:03.714288 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:04.008686 containerd[1901]: time="2025-02-13T19:29:04.008623485Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:29:04.011012 systemd[1]: cri-containerd-17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32.scope: Deactivated successfully. Feb 13 19:29:04.011551 systemd[1]: cri-containerd-17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32.scope: Consumed 541ms CPU time, 169.3M memory peak, 151M written to disk. Feb 13 19:29:04.039087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32-rootfs.mount: Deactivated successfully. 
Feb 13 19:29:04.044142 kubelet[2397]: I0213 19:29:04.044114 2397 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:29:04.356875 containerd[1901]: time="2025-02-13T19:29:04.356727264Z" level=info msg="shim disconnected" id=17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32 namespace=k8s.io Feb 13 19:29:04.356875 containerd[1901]: time="2025-02-13T19:29:04.356785930Z" level=warning msg="cleaning up after shim disconnected" id=17caa4f064f97ca304596aa62a93dce52da4621451afc29385033dc251bbcd32 namespace=k8s.io Feb 13 19:29:04.356875 containerd[1901]: time="2025-02-13T19:29:04.356797806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:29:04.521020 kubelet[2397]: E0213 19:29:04.520963 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:04.809620 containerd[1901]: time="2025-02-13T19:29:04.809290190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:29:05.521920 kubelet[2397]: E0213 19:29:05.521862 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:05.731681 systemd[1]: Created slice kubepods-besteffort-pod75d90e50_6992_4166_9b4f_bb7871aaa223.slice - libcontainer container kubepods-besteffort-pod75d90e50_6992_4166_9b4f_bb7871aaa223.slice. 
Feb 13 19:29:05.741036 containerd[1901]: time="2025-02-13T19:29:05.740954606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:0,}" Feb 13 19:29:05.848017 containerd[1901]: time="2025-02-13T19:29:05.847878885Z" level=error msg="Failed to destroy network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.850919 containerd[1901]: time="2025-02-13T19:29:05.850843020Z" level=error msg="encountered an error cleaning up failed sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.851195 containerd[1901]: time="2025-02-13T19:29:05.850961995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.851291 kubelet[2397]: E0213 19:29:05.851248 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.851350 kubelet[2397]: E0213 19:29:05.851330 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:05.851420 kubelet[2397]: E0213 19:29:05.851374 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:05.851463 kubelet[2397]: E0213 19:29:05.851428 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:05.852458 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b-shm.mount: Deactivated successfully. Feb 13 19:29:06.522946 kubelet[2397]: E0213 19:29:06.522902 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:06.815645 kubelet[2397]: I0213 19:29:06.813804 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b" Feb 13 19:29:06.815916 containerd[1901]: time="2025-02-13T19:29:06.814882538Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:06.815916 containerd[1901]: time="2025-02-13T19:29:06.815131548Z" level=info msg="Ensure that sandbox b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b in task-service has been cleanup successfully" Feb 13 19:29:06.818625 containerd[1901]: time="2025-02-13T19:29:06.817789547Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:06.818625 containerd[1901]: time="2025-02-13T19:29:06.817824260Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:06.833772 containerd[1901]: time="2025-02-13T19:29:06.828266462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:1,}" Feb 13 19:29:06.840297 systemd[1]: run-netns-cni\x2d0265db43\x2d4e5f\x2d4de9\x2dd035\x2da80d9c2ae337.mount: Deactivated successfully. 
Feb 13 19:29:07.025407 containerd[1901]: time="2025-02-13T19:29:07.024204611Z" level=error msg="Failed to destroy network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:07.025407 containerd[1901]: time="2025-02-13T19:29:07.024618143Z" level=error msg="encountered an error cleaning up failed sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:07.025407 containerd[1901]: time="2025-02-13T19:29:07.024685534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:07.031389 kubelet[2397]: E0213 19:29:07.031133 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:07.031389 kubelet[2397]: E0213 19:29:07.031203 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:07.031389 kubelet[2397]: E0213 19:29:07.031233 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:07.031634 kubelet[2397]: E0213 19:29:07.031282 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:07.031779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d-shm.mount: Deactivated successfully. 
Feb 13 19:29:07.523992 kubelet[2397]: E0213 19:29:07.523621 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:07.824629 kubelet[2397]: I0213 19:29:07.824341 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d" Feb 13 19:29:07.827177 containerd[1901]: time="2025-02-13T19:29:07.827051312Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:07.828075 containerd[1901]: time="2025-02-13T19:29:07.828024144Z" level=info msg="Ensure that sandbox 1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d in task-service has been cleanup successfully" Feb 13 19:29:07.828652 containerd[1901]: time="2025-02-13T19:29:07.828624635Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:07.828850 containerd[1901]: time="2025-02-13T19:29:07.828750003Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:07.830620 containerd[1901]: time="2025-02-13T19:29:07.830590780Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:07.830719 containerd[1901]: time="2025-02-13T19:29:07.830698630Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:07.830862 containerd[1901]: time="2025-02-13T19:29:07.830720794Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:07.839909 containerd[1901]: time="2025-02-13T19:29:07.832594433Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:2,}" Feb 13 19:29:07.839777 systemd[1]: run-netns-cni\x2d09c7019a\x2df06d\x2d1b64\x2daaa1\x2dd9a0de4aefd8.mount: Deactivated successfully. Feb 13 19:29:08.013530 containerd[1901]: time="2025-02-13T19:29:08.013479924Z" level=error msg="Failed to destroy network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:08.015253 containerd[1901]: time="2025-02-13T19:29:08.015200732Z" level=error msg="encountered an error cleaning up failed sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:08.015457 containerd[1901]: time="2025-02-13T19:29:08.015295195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:08.016553 kubelet[2397]: E0213 19:29:08.016510 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:08.016851 kubelet[2397]: E0213 19:29:08.016581 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:08.016851 kubelet[2397]: E0213 19:29:08.016609 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:08.017503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383-shm.mount: Deactivated successfully. 
Feb 13 19:29:08.022601 kubelet[2397]: E0213 19:29:08.016866 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:08.524745 kubelet[2397]: E0213 19:29:08.524678 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:08.829780 kubelet[2397]: I0213 19:29:08.828603 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383" Feb 13 19:29:08.829972 containerd[1901]: time="2025-02-13T19:29:08.829647239Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:08.829972 containerd[1901]: time="2025-02-13T19:29:08.829947639Z" level=info msg="Ensure that sandbox c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383 in task-service has been cleanup successfully" Feb 13 19:29:08.833199 containerd[1901]: time="2025-02-13T19:29:08.832738494Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:08.833199 containerd[1901]: time="2025-02-13T19:29:08.832764715Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns 
successfully" Feb 13 19:29:08.833343 containerd[1901]: time="2025-02-13T19:29:08.833238312Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:08.833343 containerd[1901]: time="2025-02-13T19:29:08.833330741Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:08.833524 containerd[1901]: time="2025-02-13T19:29:08.833345405Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:08.835220 systemd[1]: run-netns-cni\x2d7a7dd3cc\x2dc38c\x2d95e3\x2d457d\x2d252c9eec9404.mount: Deactivated successfully. Feb 13 19:29:08.837691 containerd[1901]: time="2025-02-13T19:29:08.835251652Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:08.837691 containerd[1901]: time="2025-02-13T19:29:08.835440422Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:08.837691 containerd[1901]: time="2025-02-13T19:29:08.835458583Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:08.837691 containerd[1901]: time="2025-02-13T19:29:08.837122243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:3,}" Feb 13 19:29:08.919272 systemd[1]: Created slice kubepods-besteffort-podc744dade_8122_442e_a043_303bbaac5bf4.slice - libcontainer container kubepods-besteffort-podc744dade_8122_442e_a043_303bbaac5bf4.slice. 
Feb 13 19:29:09.036308 kubelet[2397]: I0213 19:29:09.036229 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndwzp\" (UniqueName: \"kubernetes.io/projected/c744dade-8122-442e-a043-303bbaac5bf4-kube-api-access-ndwzp\") pod \"nginx-deployment-8587fbcb89-hjtrf\" (UID: \"c744dade-8122-442e-a043-303bbaac5bf4\") " pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:09.091428 containerd[1901]: time="2025-02-13T19:29:09.091287043Z" level=error msg="Failed to destroy network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.095258 containerd[1901]: time="2025-02-13T19:29:09.095154463Z" level=error msg="encountered an error cleaning up failed sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.097256 containerd[1901]: time="2025-02-13T19:29:09.096500928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.097325 kubelet[2397]: E0213 19:29:09.096786 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.097325 kubelet[2397]: E0213 19:29:09.096852 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:09.097325 kubelet[2397]: E0213 19:29:09.096882 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:09.096611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8-shm.mount: Deactivated successfully. 
Feb 13 19:29:09.098428 kubelet[2397]: E0213 19:29:09.096933 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:09.227235 containerd[1901]: time="2025-02-13T19:29:09.227189952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:0,}" Feb 13 19:29:09.436384 containerd[1901]: time="2025-02-13T19:29:09.436225486Z" level=error msg="Failed to destroy network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.437191 containerd[1901]: time="2025-02-13T19:29:09.437147584Z" level=error msg="encountered an error cleaning up failed sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.437326 containerd[1901]: time="2025-02-13T19:29:09.437234761Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.437782 kubelet[2397]: E0213 19:29:09.437666 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:09.437782 kubelet[2397]: E0213 19:29:09.437759 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:09.437998 kubelet[2397]: E0213 19:29:09.437787 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:09.437998 kubelet[2397]: E0213 19:29:09.437839 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:09.505030 kubelet[2397]: E0213 19:29:09.504778 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:09.527024 kubelet[2397]: E0213 19:29:09.526607 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:09.839108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a-shm.mount: Deactivated successfully. 
Feb 13 19:29:09.842031 kubelet[2397]: I0213 19:29:09.841939 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8" Feb 13 19:29:09.843108 containerd[1901]: time="2025-02-13T19:29:09.843066070Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:09.843809 containerd[1901]: time="2025-02-13T19:29:09.843387259Z" level=info msg="Ensure that sandbox 81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8 in task-service has been cleanup successfully" Feb 13 19:29:09.846640 containerd[1901]: time="2025-02-13T19:29:09.846491572Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:09.846640 containerd[1901]: time="2025-02-13T19:29:09.846553453Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:09.849930 systemd[1]: run-netns-cni\x2d05ad75f7\x2dcf58\x2d54a5\x2d54d9\x2d1b6c5faa8f94.mount: Deactivated successfully. 
Feb 13 19:29:09.851671 containerd[1901]: time="2025-02-13T19:29:09.851633512Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:09.851766 containerd[1901]: time="2025-02-13T19:29:09.851750862Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:09.851866 containerd[1901]: time="2025-02-13T19:29:09.851767333Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:09.853312 containerd[1901]: time="2025-02-13T19:29:09.853283743Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:09.853921 kubelet[2397]: I0213 19:29:09.853893 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a" Feb 13 19:29:09.854523 containerd[1901]: time="2025-02-13T19:29:09.854496418Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:09.854523 containerd[1901]: time="2025-02-13T19:29:09.854520239Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:09.854702 containerd[1901]: time="2025-02-13T19:29:09.854642010Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:09.854858 containerd[1901]: time="2025-02-13T19:29:09.854834251Z" level=info msg="Ensure that sandbox 1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a in task-service has been cleanup successfully" Feb 13 19:29:09.858440 containerd[1901]: time="2025-02-13T19:29:09.857341050Z" level=info msg="StopPodSandbox for 
\"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:09.858440 containerd[1901]: time="2025-02-13T19:29:09.857520688Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:09.858440 containerd[1901]: time="2025-02-13T19:29:09.857539691Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:09.858440 containerd[1901]: time="2025-02-13T19:29:09.857632280Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:09.858440 containerd[1901]: time="2025-02-13T19:29:09.857645790Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:09.860758 systemd[1]: run-netns-cni\x2d7f5e9838\x2d888b\x2d1ac1\x2d527f\x2d1a25288b5fac.mount: Deactivated successfully. 
Feb 13 19:29:09.862236 containerd[1901]: time="2025-02-13T19:29:09.862180725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:1,}" Feb 13 19:29:09.863859 containerd[1901]: time="2025-02-13T19:29:09.863824325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:4,}" Feb 13 19:29:10.076396 containerd[1901]: time="2025-02-13T19:29:10.076222381Z" level=error msg="Failed to destroy network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.076794 containerd[1901]: time="2025-02-13T19:29:10.076694249Z" level=error msg="encountered an error cleaning up failed sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.076794 containerd[1901]: time="2025-02-13T19:29:10.076777064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.079510 kubelet[2397]: E0213 19:29:10.078731 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.079510 kubelet[2397]: E0213 19:29:10.078803 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:10.079510 kubelet[2397]: E0213 19:29:10.078835 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:10.079771 kubelet[2397]: E0213 19:29:10.079009 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:10.125432 containerd[1901]: time="2025-02-13T19:29:10.125289224Z" level=error msg="Failed to destroy network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.127299 containerd[1901]: time="2025-02-13T19:29:10.127244253Z" level=error msg="encountered an error cleaning up failed sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.127458 containerd[1901]: time="2025-02-13T19:29:10.127337279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.127615 kubelet[2397]: E0213 19:29:10.127575 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:10.127688 kubelet[2397]: E0213 19:29:10.127639 2397 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:10.127688 kubelet[2397]: E0213 19:29:10.127668 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:10.127782 kubelet[2397]: E0213 19:29:10.127719 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:10.527877 kubelet[2397]: E0213 19:29:10.527839 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:10.833309 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09-shm.mount: Deactivated successfully. Feb 13 19:29:10.834707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823-shm.mount: Deactivated successfully. Feb 13 19:29:10.864028 kubelet[2397]: I0213 19:29:10.863994 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09" Feb 13 19:29:10.866934 containerd[1901]: time="2025-02-13T19:29:10.866505334Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:10.866934 containerd[1901]: time="2025-02-13T19:29:10.866762456Z" level=info msg="Ensure that sandbox 62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09 in task-service has been cleanup successfully" Feb 13 19:29:10.867742 containerd[1901]: time="2025-02-13T19:29:10.867594910Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:10.867742 containerd[1901]: time="2025-02-13T19:29:10.867643940Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:10.872826 systemd[1]: run-netns-cni\x2d8b096026\x2d78a3\x2d3fae\x2d8945\x2d948a1d7d6c06.mount: Deactivated successfully. 
Feb 13 19:29:10.874448 containerd[1901]: time="2025-02-13T19:29:10.873820243Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:10.874448 containerd[1901]: time="2025-02-13T19:29:10.873950095Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:10.874448 containerd[1901]: time="2025-02-13T19:29:10.874009744Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:10.875602 containerd[1901]: time="2025-02-13T19:29:10.875029245Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:10.875602 containerd[1901]: time="2025-02-13T19:29:10.875338541Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:10.875602 containerd[1901]: time="2025-02-13T19:29:10.875357538Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:10.877092 containerd[1901]: time="2025-02-13T19:29:10.876066946Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:10.877092 containerd[1901]: time="2025-02-13T19:29:10.876162230Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:10.877092 containerd[1901]: time="2025-02-13T19:29:10.876175755Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:10.877398 kubelet[2397]: I0213 19:29:10.876790 2397 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823" Feb 13 19:29:10.877833 containerd[1901]: time="2025-02-13T19:29:10.877807115Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:10.878007 containerd[1901]: time="2025-02-13T19:29:10.877915886Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:10.878007 containerd[1901]: time="2025-02-13T19:29:10.877932916Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:10.879804 containerd[1901]: time="2025-02-13T19:29:10.878706207Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:10.879804 containerd[1901]: time="2025-02-13T19:29:10.879308691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:5,}" Feb 13 19:29:10.880248 containerd[1901]: time="2025-02-13T19:29:10.880129966Z" level=info msg="Ensure that sandbox 63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823 in task-service has been cleanup successfully" Feb 13 19:29:10.881356 containerd[1901]: time="2025-02-13T19:29:10.881022244Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:10.881356 containerd[1901]: time="2025-02-13T19:29:10.881064821Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:10.884179 containerd[1901]: time="2025-02-13T19:29:10.884032052Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:10.884179 containerd[1901]: time="2025-02-13T19:29:10.884138332Z" 
level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:10.884179 containerd[1901]: time="2025-02-13T19:29:10.884154545Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:10.884605 systemd[1]: run-netns-cni\x2dfb6070aa\x2d1be2\x2dbc5f\x2dbc71\x2dffa9f4538286.mount: Deactivated successfully. Feb 13 19:29:10.886968 containerd[1901]: time="2025-02-13T19:29:10.886934841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:2,}" Feb 13 19:29:11.148899 containerd[1901]: time="2025-02-13T19:29:11.148505309Z" level=error msg="Failed to destroy network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.149392 containerd[1901]: time="2025-02-13T19:29:11.148925555Z" level=error msg="encountered an error cleaning up failed sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.149392 containerd[1901]: time="2025-02-13T19:29:11.149002464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.151765 kubelet[2397]: E0213 19:29:11.149252 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.151765 kubelet[2397]: E0213 19:29:11.149319 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:11.152537 kubelet[2397]: E0213 19:29:11.149349 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:11.152537 kubelet[2397]: E0213 19:29:11.152208 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:11.153391 containerd[1901]: time="2025-02-13T19:29:11.153192837Z" level=error msg="Failed to destroy network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.153844 containerd[1901]: time="2025-02-13T19:29:11.153810328Z" level=error msg="encountered an error cleaning up failed sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.154005 containerd[1901]: time="2025-02-13T19:29:11.153977636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.155023 kubelet[2397]: E0213 19:29:11.154808 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:11.155023 kubelet[2397]: E0213 19:29:11.154874 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:11.155023 kubelet[2397]: E0213 19:29:11.154909 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:11.155191 kubelet[2397]: E0213 19:29:11.154975 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:11.529023 kubelet[2397]: E0213 
19:29:11.528752 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:11.848320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7-shm.mount: Deactivated successfully. Feb 13 19:29:11.849590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3-shm.mount: Deactivated successfully. Feb 13 19:29:11.894769 kubelet[2397]: I0213 19:29:11.894737 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3" Feb 13 19:29:11.896114 containerd[1901]: time="2025-02-13T19:29:11.896078459Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:11.900557 containerd[1901]: time="2025-02-13T19:29:11.898292608Z" level=info msg="Ensure that sandbox 916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3 in task-service has been cleanup successfully" Feb 13 19:29:11.900803 containerd[1901]: time="2025-02-13T19:29:11.900754025Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:11.900898 containerd[1901]: time="2025-02-13T19:29:11.900880738Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully" Feb 13 19:29:11.905647 systemd[1]: run-netns-cni\x2df51a76d0\x2d2798\x2d9671\x2dc238\x2d1f6cc6401f16.mount: Deactivated successfully. 
Feb 13 19:29:11.908667 containerd[1901]: time="2025-02-13T19:29:11.908146227Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:11.908667 containerd[1901]: time="2025-02-13T19:29:11.908251832Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:11.908667 containerd[1901]: time="2025-02-13T19:29:11.908267160Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:11.909461 containerd[1901]: time="2025-02-13T19:29:11.909046591Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:11.909829 containerd[1901]: time="2025-02-13T19:29:11.909639544Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:11.909829 containerd[1901]: time="2025-02-13T19:29:11.909660868Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:11.911370 containerd[1901]: time="2025-02-13T19:29:11.910978646Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:11.911370 containerd[1901]: time="2025-02-13T19:29:11.911078733Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:11.911370 containerd[1901]: time="2025-02-13T19:29:11.911092314Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:11.911719 kubelet[2397]: I0213 19:29:11.911666 2397 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7" Feb 13 19:29:11.912505 containerd[1901]: time="2025-02-13T19:29:11.912481376Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:11.913534 containerd[1901]: time="2025-02-13T19:29:11.913226138Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:11.913534 containerd[1901]: time="2025-02-13T19:29:11.913250170Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:11.913534 containerd[1901]: time="2025-02-13T19:29:11.913383382Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:11.914018 containerd[1901]: time="2025-02-13T19:29:11.913600960Z" level=info msg="Ensure that sandbox c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7 in task-service has been cleanup successfully" Feb 13 19:29:11.914104 containerd[1901]: time="2025-02-13T19:29:11.914021190Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 13 19:29:11.914104 containerd[1901]: time="2025-02-13T19:29:11.914048234Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:11.917671 containerd[1901]: time="2025-02-13T19:29:11.917480855Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:11.917671 containerd[1901]: time="2025-02-13T19:29:11.917615065Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:11.917671 containerd[1901]: time="2025-02-13T19:29:11.917630577Z" level=info msg="StopPodSandbox 
for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:11.917879 containerd[1901]: time="2025-02-13T19:29:11.917707021Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:11.917879 containerd[1901]: time="2025-02-13T19:29:11.917779325Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:11.917879 containerd[1901]: time="2025-02-13T19:29:11.917791648Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:11.918978 systemd[1]: run-netns-cni\x2d2d0485cc\x2dec4c\x2dac29\x2d4361\x2d7e1bf73ee4ed.mount: Deactivated successfully. Feb 13 19:29:11.919877 containerd[1901]: time="2025-02-13T19:29:11.919242119Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:11.919877 containerd[1901]: time="2025-02-13T19:29:11.919330227Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:11.919877 containerd[1901]: time="2025-02-13T19:29:11.919345098Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:11.922024 containerd[1901]: time="2025-02-13T19:29:11.921987564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:3,}" Feb 13 19:29:11.924377 containerd[1901]: time="2025-02-13T19:29:11.924327668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:6,}" Feb 13 19:29:12.238024 containerd[1901]: time="2025-02-13T19:29:12.237969090Z" level=error 
msg="Failed to destroy network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.239528 containerd[1901]: time="2025-02-13T19:29:12.239229582Z" level=error msg="encountered an error cleaning up failed sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.239528 containerd[1901]: time="2025-02-13T19:29:12.239329281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.239724 kubelet[2397]: E0213 19:29:12.239656 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.239788 kubelet[2397]: E0213 19:29:12.239716 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:12.239788 kubelet[2397]: E0213 19:29:12.239745 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:12.240605 kubelet[2397]: E0213 19:29:12.239798 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:12.319826 containerd[1901]: time="2025-02-13T19:29:12.319518300Z" level=error msg="Failed to destroy network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 19:29:12.321230 containerd[1901]: time="2025-02-13T19:29:12.320903062Z" level=error msg="encountered an error cleaning up failed sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.321763 containerd[1901]: time="2025-02-13T19:29:12.321252393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.322261 kubelet[2397]: E0213 19:29:12.322111 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:12.322261 kubelet[2397]: E0213 19:29:12.322177 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:12.322261 kubelet[2397]: E0213 19:29:12.322205 2397 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:12.322525 kubelet[2397]: E0213 19:29:12.322263 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:12.529353 kubelet[2397]: E0213 19:29:12.529122 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:12.833796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf-shm.mount: Deactivated successfully. Feb 13 19:29:12.833951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba-shm.mount: Deactivated successfully. 
Feb 13 19:29:12.918445 kubelet[2397]: I0213 19:29:12.917793 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf" Feb 13 19:29:12.919637 containerd[1901]: time="2025-02-13T19:29:12.919600719Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\"" Feb 13 19:29:12.922387 containerd[1901]: time="2025-02-13T19:29:12.920430423Z" level=info msg="Ensure that sandbox 66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf in task-service has been cleanup successfully" Feb 13 19:29:12.922598 containerd[1901]: time="2025-02-13T19:29:12.922555989Z" level=info msg="TearDown network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" successfully" Feb 13 19:29:12.922819 containerd[1901]: time="2025-02-13T19:29:12.922667930Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" returns successfully" Feb 13 19:29:12.924675 containerd[1901]: time="2025-02-13T19:29:12.923252649Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:12.924675 containerd[1901]: time="2025-02-13T19:29:12.923349412Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:12.924675 containerd[1901]: time="2025-02-13T19:29:12.923388954Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully" Feb 13 19:29:12.924183 systemd[1]: run-netns-cni\x2dd7dd4e82\x2d1102\x2d51aa\x2dda8b\x2d9dedf7d68c65.mount: Deactivated successfully. 
Feb 13 19:29:12.925291 containerd[1901]: time="2025-02-13T19:29:12.925269167Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:12.925502 containerd[1901]: time="2025-02-13T19:29:12.925484844Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:12.925642 containerd[1901]: time="2025-02-13T19:29:12.925626270Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:12.926283 containerd[1901]: time="2025-02-13T19:29:12.926166294Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:12.926507 containerd[1901]: time="2025-02-13T19:29:12.926487593Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:12.926614 containerd[1901]: time="2025-02-13T19:29:12.926597059Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:12.927071 kubelet[2397]: I0213 19:29:12.927043 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba" Feb 13 19:29:12.927640 containerd[1901]: time="2025-02-13T19:29:12.927244024Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:12.927640 containerd[1901]: time="2025-02-13T19:29:12.927328999Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:12.927640 containerd[1901]: time="2025-02-13T19:29:12.927342504Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns 
successfully" Feb 13 19:29:12.928087 containerd[1901]: time="2025-02-13T19:29:12.928065944Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:12.928291 containerd[1901]: time="2025-02-13T19:29:12.928235233Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:12.928390 containerd[1901]: time="2025-02-13T19:29:12.928355683Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:12.928538 containerd[1901]: time="2025-02-13T19:29:12.928521215Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:12.928883 containerd[1901]: time="2025-02-13T19:29:12.928833564Z" level=info msg="Ensure that sandbox 277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba in task-service has been cleanup successfully" Feb 13 19:29:12.931469 containerd[1901]: time="2025-02-13T19:29:12.931438944Z" level=info msg="TearDown network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" successfully" Feb 13 19:29:12.931780 containerd[1901]: time="2025-02-13T19:29:12.931567419Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" returns successfully" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932033267Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932130471Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932144407Z" level=info msg="StopPodSandbox for 
\"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932227081Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932297321Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:12.932536 containerd[1901]: time="2025-02-13T19:29:12.932309402Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:12.932264 systemd[1]: run-netns-cni\x2d802d1af1\x2d5cdd\x2db667\x2d43ac\x2dde26ccec58ad.mount: Deactivated successfully. Feb 13 19:29:12.934807 containerd[1901]: time="2025-02-13T19:29:12.934189397Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:12.934807 containerd[1901]: time="2025-02-13T19:29:12.934284666Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:12.934807 containerd[1901]: time="2025-02-13T19:29:12.934299513Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:12.935125 containerd[1901]: time="2025-02-13T19:29:12.935102665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:7,}" Feb 13 19:29:12.937972 containerd[1901]: time="2025-02-13T19:29:12.937940610Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:12.938073 containerd[1901]: time="2025-02-13T19:29:12.938044768Z" level=info msg="TearDown network for sandbox 
\"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:12.938129 containerd[1901]: time="2025-02-13T19:29:12.938075768Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:12.938953 containerd[1901]: time="2025-02-13T19:29:12.938922429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:4,}" Feb 13 19:29:13.200796 containerd[1901]: time="2025-02-13T19:29:13.200669887Z" level=error msg="Failed to destroy network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.202623 containerd[1901]: time="2025-02-13T19:29:13.202573125Z" level=error msg="encountered an error cleaning up failed sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.202760 containerd[1901]: time="2025-02-13T19:29:13.202697968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.203396 kubelet[2397]: E0213 19:29:13.203290 2397 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.203995 kubelet[2397]: E0213 19:29:13.203356 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:13.203995 kubelet[2397]: E0213 19:29:13.203654 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:13.203995 kubelet[2397]: E0213 19:29:13.203773 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:13.208559 containerd[1901]: time="2025-02-13T19:29:13.208482874Z" level=error msg="Failed to destroy network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.208857 containerd[1901]: time="2025-02-13T19:29:13.208821969Z" level=error msg="encountered an error cleaning up failed sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.208934 containerd[1901]: time="2025-02-13T19:29:13.208894481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.209252 kubelet[2397]: E0213 19:29:13.209117 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:13.209252 kubelet[2397]: E0213 
19:29:13.209188 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:13.209252 kubelet[2397]: E0213 19:29:13.209217 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:13.209899 kubelet[2397]: E0213 19:29:13.209589 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:13.529457 kubelet[2397]: E0213 19:29:13.529398 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:13.835344 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d-shm.mount: Deactivated successfully. Feb 13 19:29:13.837309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f-shm.mount: Deactivated successfully. Feb 13 19:29:13.942344 containerd[1901]: time="2025-02-13T19:29:13.942021681Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\"" Feb 13 19:29:13.942344 containerd[1901]: time="2025-02-13T19:29:13.942327945Z" level=info msg="Ensure that sandbox 4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f in task-service has been cleanup successfully" Feb 13 19:29:13.942838 kubelet[2397]: I0213 19:29:13.940433 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f" Feb 13 19:29:13.944478 containerd[1901]: time="2025-02-13T19:29:13.944439156Z" level=info msg="TearDown network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" successfully" Feb 13 19:29:13.944724 containerd[1901]: time="2025-02-13T19:29:13.944475481Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" returns successfully" Feb 13 19:29:13.947159 systemd[1]: run-netns-cni\x2da7014160\x2dd45c\x2d7484\x2df42f\x2de0c03757f59b.mount: Deactivated successfully. 
Feb 13 19:29:13.951291 containerd[1901]: time="2025-02-13T19:29:13.951243142Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\"" Feb 13 19:29:13.951431 containerd[1901]: time="2025-02-13T19:29:13.951385616Z" level=info msg="TearDown network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" successfully" Feb 13 19:29:13.951431 containerd[1901]: time="2025-02-13T19:29:13.951414826Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" returns successfully" Feb 13 19:29:13.953878 containerd[1901]: time="2025-02-13T19:29:13.952253475Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:13.953878 containerd[1901]: time="2025-02-13T19:29:13.952376717Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:13.953878 containerd[1901]: time="2025-02-13T19:29:13.952394011Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully" Feb 13 19:29:13.954281 containerd[1901]: time="2025-02-13T19:29:13.954253180Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:13.954616 containerd[1901]: time="2025-02-13T19:29:13.954474779Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:13.954726 containerd[1901]: time="2025-02-13T19:29:13.954709345Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:13.955470 containerd[1901]: time="2025-02-13T19:29:13.955437447Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:13.955776 
containerd[1901]: time="2025-02-13T19:29:13.955750411Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:13.955869 containerd[1901]: time="2025-02-13T19:29:13.955851312Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:13.958536 containerd[1901]: time="2025-02-13T19:29:13.958503164Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:13.958656 containerd[1901]: time="2025-02-13T19:29:13.958600555Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:13.958769 containerd[1901]: time="2025-02-13T19:29:13.958658357Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:13.961004 kubelet[2397]: I0213 19:29:13.959002 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d" Feb 13 19:29:13.961679 containerd[1901]: time="2025-02-13T19:29:13.961645636Z" level=info msg="StopPodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" Feb 13 19:29:13.963396 containerd[1901]: time="2025-02-13T19:29:13.962483721Z" level=info msg="Ensure that sandbox 592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d in task-service has been cleanup successfully" Feb 13 19:29:13.965439 containerd[1901]: time="2025-02-13T19:29:13.963569822Z" level=info msg="TearDown network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" successfully" Feb 13 19:29:13.965881 containerd[1901]: time="2025-02-13T19:29:13.965551042Z" level=info msg="StopPodSandbox for 
\"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" returns successfully" Feb 13 19:29:13.965881 containerd[1901]: time="2025-02-13T19:29:13.965692866Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:13.965881 containerd[1901]: time="2025-02-13T19:29:13.965799421Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:13.965881 containerd[1901]: time="2025-02-13T19:29:13.965814343Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:13.967876 containerd[1901]: time="2025-02-13T19:29:13.966629570Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:13.967129 systemd[1]: run-netns-cni\x2d7c0f6703\x2daa26\x2ddc83\x2d3302\x2d8df5db02033f.mount: Deactivated successfully. 
Feb 13 19:29:13.970350 containerd[1901]: time="2025-02-13T19:29:13.968933552Z" level=info msg="TearDown network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" successfully" Feb 13 19:29:13.970350 containerd[1901]: time="2025-02-13T19:29:13.968960305Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" returns successfully" Feb 13 19:29:13.970350 containerd[1901]: time="2025-02-13T19:29:13.969065163Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:13.970350 containerd[1901]: time="2025-02-13T19:29:13.969143812Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:13.970350 containerd[1901]: time="2025-02-13T19:29:13.969160144Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:13.971310 containerd[1901]: time="2025-02-13T19:29:13.971150340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:8,}" Feb 13 19:29:13.973436 containerd[1901]: time="2025-02-13T19:29:13.973408212Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:13.973536 containerd[1901]: time="2025-02-13T19:29:13.973509191Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 13 19:29:13.973536 containerd[1901]: time="2025-02-13T19:29:13.973524826Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:13.974300 containerd[1901]: time="2025-02-13T19:29:13.974026263Z" level=info msg="StopPodSandbox for 
\"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:13.974300 containerd[1901]: time="2025-02-13T19:29:13.974126619Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:13.974300 containerd[1901]: time="2025-02-13T19:29:13.974142530Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:13.975823 containerd[1901]: time="2025-02-13T19:29:13.975797299Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:13.975913 containerd[1901]: time="2025-02-13T19:29:13.975899382Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:13.975967 containerd[1901]: time="2025-02-13T19:29:13.975916129Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:13.976895 containerd[1901]: time="2025-02-13T19:29:13.976865700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:5,}" Feb 13 19:29:14.183234 containerd[1901]: time="2025-02-13T19:29:14.181439796Z" level=error msg="Failed to destroy network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.184269 containerd[1901]: time="2025-02-13T19:29:14.183965468Z" level=error msg="encountered an error cleaning up failed sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.184269 containerd[1901]: time="2025-02-13T19:29:14.184055047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.185026 kubelet[2397]: E0213 19:29:14.184625 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.185026 kubelet[2397]: E0213 19:29:14.184694 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:14.185026 kubelet[2397]: E0213 19:29:14.184729 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hjtrf" Feb 13 19:29:14.185224 kubelet[2397]: E0213 19:29:14.184781 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hjtrf_default(c744dade-8122-442e-a043-303bbaac5bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hjtrf" podUID="c744dade-8122-442e-a043-303bbaac5bf4" Feb 13 19:29:14.192305 containerd[1901]: time="2025-02-13T19:29:14.191389842Z" level=error msg="Failed to destroy network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.192305 containerd[1901]: time="2025-02-13T19:29:14.192141182Z" level=error msg="encountered an error cleaning up failed sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.192305 containerd[1901]: time="2025-02-13T19:29:14.192218135Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.193395 kubelet[2397]: E0213 19:29:14.193004 2397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:14.193395 kubelet[2397]: E0213 19:29:14.193064 2397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:14.193395 kubelet[2397]: E0213 19:29:14.193090 2397 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2slcf" Feb 13 19:29:14.193614 kubelet[2397]: E0213 19:29:14.193142 2397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2slcf_calico-system(75d90e50-6992-4166-9b4f-bb7871aaa223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2slcf" podUID="75d90e50-6992-4166-9b4f-bb7871aaa223" Feb 13 19:29:14.419678 containerd[1901]: time="2025-02-13T19:29:14.419625774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:14.420800 containerd[1901]: time="2025-02-13T19:29:14.420749566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:29:14.423231 containerd[1901]: time="2025-02-13T19:29:14.421971681Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:14.425124 containerd[1901]: time="2025-02-13T19:29:14.424410805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:14.425124 containerd[1901]: time="2025-02-13T19:29:14.424978424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.615353413s" Feb 13 19:29:14.425124 containerd[1901]: time="2025-02-13T19:29:14.425013471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:29:14.447273 containerd[1901]: time="2025-02-13T19:29:14.447156828Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:29:14.471079 containerd[1901]: time="2025-02-13T19:29:14.471030080Z" level=info msg="CreateContainer within sandbox \"661dc031d8a987d5c5274effbc63459e05b8c781d3917fb9bbaa5a2a1422e32d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3d6793c984aa16ca8a3f4bb553402cf81494f9f094408eeb6f725038b83c79bd\"" Feb 13 19:29:14.471929 containerd[1901]: time="2025-02-13T19:29:14.471889456Z" level=info msg="StartContainer for \"3d6793c984aa16ca8a3f4bb553402cf81494f9f094408eeb6f725038b83c79bd\"" Feb 13 19:29:14.534417 kubelet[2397]: E0213 19:29:14.529700 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:14.609568 systemd[1]: Started cri-containerd-3d6793c984aa16ca8a3f4bb553402cf81494f9f094408eeb6f725038b83c79bd.scope - libcontainer container 3d6793c984aa16ca8a3f4bb553402cf81494f9f094408eeb6f725038b83c79bd. Feb 13 19:29:14.673008 containerd[1901]: time="2025-02-13T19:29:14.672765616Z" level=info msg="StartContainer for \"3d6793c984aa16ca8a3f4bb553402cf81494f9f094408eeb6f725038b83c79bd\" returns successfully" Feb 13 19:29:14.766454 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:29:14.766597 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:29:14.840045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e-shm.mount: Deactivated successfully. Feb 13 19:29:14.841524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126285003.mount: Deactivated successfully. Feb 13 19:29:14.905004 update_engine[1889]: I20250213 19:29:14.904176 1889 update_attempter.cc:509] Updating boot flags... Feb 13 19:29:14.973325 kubelet[2397]: I0213 19:29:14.972465 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5" Feb 13 19:29:14.973632 containerd[1901]: time="2025-02-13T19:29:14.973504307Z" level=info msg="StopPodSandbox for \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\"" Feb 13 19:29:14.974021 containerd[1901]: time="2025-02-13T19:29:14.973990493Z" level=info msg="Ensure that sandbox 106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5 in task-service has been cleanup successfully" Feb 13 19:29:14.980220 systemd[1]: run-netns-cni\x2d78c926e4\x2d50d6\x2da564\x2d9646\x2d29160e45c21b.mount: Deactivated successfully. 
Feb 13 19:29:14.981125 containerd[1901]: time="2025-02-13T19:29:14.980467458Z" level=info msg="TearDown network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" successfully" Feb 13 19:29:14.981125 containerd[1901]: time="2025-02-13T19:29:14.980549694Z" level=info msg="StopPodSandbox for \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" returns successfully" Feb 13 19:29:14.986976 containerd[1901]: time="2025-02-13T19:29:14.986244599Z" level=info msg="StopPodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" Feb 13 19:29:14.986976 containerd[1901]: time="2025-02-13T19:29:14.986383966Z" level=info msg="TearDown network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" successfully" Feb 13 19:29:14.986976 containerd[1901]: time="2025-02-13T19:29:14.986401115Z" level=info msg="StopPodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" returns successfully" Feb 13 19:29:14.987904 containerd[1901]: time="2025-02-13T19:29:14.987258144Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:14.987904 containerd[1901]: time="2025-02-13T19:29:14.987431318Z" level=info msg="TearDown network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" successfully" Feb 13 19:29:14.987904 containerd[1901]: time="2025-02-13T19:29:14.987476654Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" returns successfully" Feb 13 19:29:14.988587 containerd[1901]: time="2025-02-13T19:29:14.988411765Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:14.988587 containerd[1901]: time="2025-02-13T19:29:14.988544501Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 
13 19:29:14.989254 containerd[1901]: time="2025-02-13T19:29:14.988560914Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:14.990408 containerd[1901]: time="2025-02-13T19:29:14.990374452Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:14.990578 containerd[1901]: time="2025-02-13T19:29:14.990483138Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:14.990636 containerd[1901]: time="2025-02-13T19:29:14.990583563Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:14.995447 containerd[1901]: time="2025-02-13T19:29:14.995247584Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:14.995447 containerd[1901]: time="2025-02-13T19:29:14.995383730Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:14.995447 containerd[1901]: time="2025-02-13T19:29:14.995400081Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:14.998681 containerd[1901]: time="2025-02-13T19:29:14.998539913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:6,}" Feb 13 19:29:15.033475 kubelet[2397]: I0213 19:29:15.031976 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hjkkp" podStartSLOduration=4.360666381 podStartE2EDuration="26.031953295s" podCreationTimestamp="2025-02-13 19:28:49 +0000 UTC" firstStartedPulling="2025-02-13 19:28:52.754669286 +0000 UTC 
m=+4.252061226" lastFinishedPulling="2025-02-13 19:29:14.425956199 +0000 UTC m=+25.923348140" observedRunningTime="2025-02-13 19:29:15.03020406 +0000 UTC m=+26.527596051" watchObservedRunningTime="2025-02-13 19:29:15.031953295 +0000 UTC m=+26.529345250" Feb 13 19:29:15.061067 kubelet[2397]: I0213 19:29:15.061029 2397 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e" Feb 13 19:29:15.063415 containerd[1901]: time="2025-02-13T19:29:15.061984615Z" level=info msg="StopPodSandbox for \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\"" Feb 13 19:29:15.063415 containerd[1901]: time="2025-02-13T19:29:15.062279364Z" level=info msg="Ensure that sandbox 0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e in task-service has been cleanup successfully" Feb 13 19:29:15.080188 containerd[1901]: time="2025-02-13T19:29:15.078571543Z" level=info msg="TearDown network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" successfully" Feb 13 19:29:15.080188 containerd[1901]: time="2025-02-13T19:29:15.078646823Z" level=info msg="StopPodSandbox for \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" returns successfully" Feb 13 19:29:15.085494 containerd[1901]: time="2025-02-13T19:29:15.084302979Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\"" Feb 13 19:29:15.091893 containerd[1901]: time="2025-02-13T19:29:15.085647549Z" level=info msg="TearDown network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" successfully" Feb 13 19:29:15.091893 containerd[1901]: time="2025-02-13T19:29:15.085678879Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" returns successfully" Feb 13 19:29:15.091893 containerd[1901]: time="2025-02-13T19:29:15.088378685Z" level=info msg="StopPodSandbox 
for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\"" Feb 13 19:29:15.091869 systemd[1]: run-netns-cni\x2dffed8eab\x2df183\x2d3f5b\x2da83f\x2dd5d9d14d150c.mount: Deactivated successfully. Feb 13 19:29:15.096391 containerd[1901]: time="2025-02-13T19:29:15.095333132Z" level=info msg="TearDown network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" successfully" Feb 13 19:29:15.096391 containerd[1901]: time="2025-02-13T19:29:15.095500725Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" returns successfully" Feb 13 19:29:15.098857 containerd[1901]: time="2025-02-13T19:29:15.098117585Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:15.098857 containerd[1901]: time="2025-02-13T19:29:15.098237920Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:15.098857 containerd[1901]: time="2025-02-13T19:29:15.098255687Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully" Feb 13 19:29:15.104766 containerd[1901]: time="2025-02-13T19:29:15.102865296Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:15.104766 containerd[1901]: time="2025-02-13T19:29:15.103258818Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:15.104766 containerd[1901]: time="2025-02-13T19:29:15.103334670Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:15.113838 containerd[1901]: time="2025-02-13T19:29:15.113789303Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 
13 19:29:15.116234 containerd[1901]: time="2025-02-13T19:29:15.115492630Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:15.116234 containerd[1901]: time="2025-02-13T19:29:15.115525182Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:15.127025 containerd[1901]: time="2025-02-13T19:29:15.122859609Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:15.127025 containerd[1901]: time="2025-02-13T19:29:15.123211518Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:15.127025 containerd[1901]: time="2025-02-13T19:29:15.123234477Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:15.150850 containerd[1901]: time="2025-02-13T19:29:15.149853110Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:15.150850 containerd[1901]: time="2025-02-13T19:29:15.149983914Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:15.150850 containerd[1901]: time="2025-02-13T19:29:15.150001544Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:15.165486 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3447) Feb 13 19:29:15.167627 containerd[1901]: time="2025-02-13T19:29:15.166382038Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:15.167627 containerd[1901]: time="2025-02-13T19:29:15.166517183Z" 
level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:15.167627 containerd[1901]: time="2025-02-13T19:29:15.166532644Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:15.172408 containerd[1901]: time="2025-02-13T19:29:15.171620107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:9,}" Feb 13 19:29:15.530748 kubelet[2397]: E0213 19:29:15.530682 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:15.772940 (udev-worker)[3446]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:29:15.774691 systemd-networkd[1745]: califa5e955ace8: Link UP Feb 13 19:29:15.779165 systemd-networkd[1745]: califa5e955ace8: Gained carrier Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.411 [INFO][3484] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.470 [INFO][3484] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-csi--node--driver--2slcf-eth0 csi-node-driver- calico-system 75d90e50-6992-4166-9b4f-bb7871aaa223 1042 0 2025-02-13 19:28:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.17.153 csi-node-driver-2slcf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califa5e955ace8 [] []}} 
ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.471 [INFO][3484] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.697 [INFO][3589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" HandleID="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Workload="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.713 [INFO][3589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" HandleID="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Workload="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051350), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.153", "pod":"csi-node-driver-2slcf", "timestamp":"2025-02-13 19:29:15.697121278 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.713 [INFO][3589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.713 [INFO][3589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.713 [INFO][3589] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.716 [INFO][3589] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.722 [INFO][3589] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.727 [INFO][3589] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.730 [INFO][3589] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.733 [INFO][3589] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.733 [INFO][3589] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.735 [INFO][3589] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.740 [INFO][3589] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 
19:29:15.748 [INFO][3589] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.129/26] block=192.168.66.128/26 handle="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.748 [INFO][3589] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.129/26] handle="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" host="172.31.17.153" Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.748 [INFO][3589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:29:15.801454 containerd[1901]: 2025-02-13 19:29:15.748 [INFO][3589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.129/26] IPv6=[] ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" HandleID="k8s-pod-network.b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Workload="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.753 [INFO][3484] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--2slcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"75d90e50-6992-4166-9b4f-bb7871aaa223", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"csi-node-driver-2slcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa5e955ace8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.755 [INFO][3484] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.129/32] ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.755 [INFO][3484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa5e955ace8 ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.784 [INFO][3484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.784 [INFO][3484] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--2slcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"75d90e50-6992-4166-9b4f-bb7871aaa223", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c", Pod:"csi-node-driver-2slcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa5e955ace8", MAC:"ee:41:94:c0:7e:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:15.803328 containerd[1901]: 2025-02-13 19:29:15.798 [INFO][3484] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c" Namespace="calico-system" 
Pod="csi-node-driver-2slcf" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--2slcf-eth0" Feb 13 19:29:15.831039 containerd[1901]: time="2025-02-13T19:29:15.830931769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:15.831039 containerd[1901]: time="2025-02-13T19:29:15.831010118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:15.834623 containerd[1901]: time="2025-02-13T19:29:15.831026175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:15.834623 containerd[1901]: time="2025-02-13T19:29:15.832058463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:15.874622 systemd[1]: Started cri-containerd-b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c.scope - libcontainer container b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c. 
Feb 13 19:29:15.891067 systemd-networkd[1745]: caliee53b9ef67d: Link UP Feb 13 19:29:15.895699 systemd-networkd[1745]: caliee53b9ef67d: Gained carrier Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.250 [INFO][3459] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.408 [INFO][3459] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0 nginx-deployment-8587fbcb89- default c744dade-8122-442e-a043-303bbaac5bf4 1139 0 2025-02-13 19:29:08 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.153 nginx-deployment-8587fbcb89-hjtrf eth0 default [] [] [kns.default ksa.default.default] caliee53b9ef67d [] []}} ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.408 [INFO][3459] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.695 [INFO][3562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" HandleID="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Workload="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.718 [INFO][3562] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" HandleID="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Workload="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cba90), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.153", "pod":"nginx-deployment-8587fbcb89-hjtrf", "timestamp":"2025-02-13 19:29:15.69580051 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.719 [INFO][3562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.748 [INFO][3562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.748 [INFO][3562] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.817 [INFO][3562] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.828 [INFO][3562] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.837 [INFO][3562] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.847 [INFO][3562] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.851 [INFO][3562] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.851 [INFO][3562] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.853 [INFO][3562] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.865 [INFO][3562] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.878 [INFO][3562] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.130/26] block=192.168.66.128/26 
handle="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.878 [INFO][3562] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.130/26] handle="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" host="172.31.17.153" Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.878 [INFO][3562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:29:15.915292 containerd[1901]: 2025-02-13 19:29:15.879 [INFO][3562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.130/26] IPv6=[] ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" HandleID="k8s-pod-network.66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Workload="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.886 [INFO][3459] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"c744dade-8122-442e-a043-303bbaac5bf4", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-hjtrf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliee53b9ef67d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.886 [INFO][3459] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.130/32] ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.888 [INFO][3459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee53b9ef67d ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.897 [INFO][3459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.898 [INFO][3459] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" 
WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"c744dade-8122-442e-a043-303bbaac5bf4", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c", Pod:"nginx-deployment-8587fbcb89-hjtrf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliee53b9ef67d", MAC:"22:17:03:c0:3f:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:15.917999 containerd[1901]: 2025-02-13 19:29:15.911 [INFO][3459] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c" Namespace="default" Pod="nginx-deployment-8587fbcb89-hjtrf" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--8587fbcb89--hjtrf-eth0" Feb 13 19:29:15.939854 containerd[1901]: time="2025-02-13T19:29:15.939583165Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2slcf,Uid:75d90e50-6992-4166-9b4f-bb7871aaa223,Namespace:calico-system,Attempt:9,} returns sandbox id \"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c\"" Feb 13 19:29:15.942541 containerd[1901]: time="2025-02-13T19:29:15.942389631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:29:15.960600 containerd[1901]: time="2025-02-13T19:29:15.960429276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:15.960600 containerd[1901]: time="2025-02-13T19:29:15.960561238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:15.960983 containerd[1901]: time="2025-02-13T19:29:15.960585386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:15.960983 containerd[1901]: time="2025-02-13T19:29:15.960924476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:15.998660 systemd[1]: Started cri-containerd-66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c.scope - libcontainer container 66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c. 
Feb 13 19:29:16.134651 containerd[1901]: time="2025-02-13T19:29:16.134519151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hjtrf,Uid:c744dade-8122-442e-a043-303bbaac5bf4,Namespace:default,Attempt:6,} returns sandbox id \"66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c\"" Feb 13 19:29:16.537083 kubelet[2397]: E0213 19:29:16.536299 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:16.842573 systemd[1]: run-containerd-runc-k8s.io-66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c-runc.R0NSA8.mount: Deactivated successfully. Feb 13 19:29:17.009409 kernel: bpftool[3840]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:29:17.402485 systemd-networkd[1745]: caliee53b9ef67d: Gained IPv6LL Feb 13 19:29:17.537218 kubelet[2397]: E0213 19:29:17.537178 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:17.635716 systemd-networkd[1745]: vxlan.calico: Link UP Feb 13 19:29:17.635728 systemd-networkd[1745]: vxlan.calico: Gained carrier Feb 13 19:29:17.654812 containerd[1901]: time="2025-02-13T19:29:17.654622283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.660550 containerd[1901]: time="2025-02-13T19:29:17.660474618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:29:17.666921 containerd[1901]: time="2025-02-13T19:29:17.664336713Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.673217 containerd[1901]: time="2025-02-13T19:29:17.671913033Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.674554 containerd[1901]: time="2025-02-13T19:29:17.673150844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.730719553s" Feb 13 19:29:17.674819 containerd[1901]: time="2025-02-13T19:29:17.674791045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:29:17.677721 containerd[1901]: time="2025-02-13T19:29:17.677687726Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:29:17.685438 (udev-worker)[3448]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:29:17.687693 containerd[1901]: time="2025-02-13T19:29:17.687651985Z" level=info msg="CreateContainer within sandbox \"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:29:17.717442 systemd-networkd[1745]: califa5e955ace8: Gained IPv6LL Feb 13 19:29:17.748646 containerd[1901]: time="2025-02-13T19:29:17.748354860Z" level=info msg="CreateContainer within sandbox \"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"28ef985d302e206d402547b58f82c0899dbe8c1d60ee0e57531febfb64889882\"" Feb 13 19:29:17.751760 containerd[1901]: time="2025-02-13T19:29:17.751676638Z" level=info msg="StartContainer for \"28ef985d302e206d402547b58f82c0899dbe8c1d60ee0e57531febfb64889882\"" Feb 13 19:29:17.835629 systemd[1]: Started cri-containerd-28ef985d302e206d402547b58f82c0899dbe8c1d60ee0e57531febfb64889882.scope - libcontainer container 28ef985d302e206d402547b58f82c0899dbe8c1d60ee0e57531febfb64889882. 
Feb 13 19:29:17.903629 containerd[1901]: time="2025-02-13T19:29:17.903581059Z" level=info msg="StartContainer for \"28ef985d302e206d402547b58f82c0899dbe8c1d60ee0e57531febfb64889882\" returns successfully" Feb 13 19:29:18.537848 kubelet[2397]: E0213 19:29:18.537724 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:19.123860 systemd-networkd[1745]: vxlan.calico: Gained IPv6LL Feb 13 19:29:19.540521 kubelet[2397]: E0213 19:29:19.538391 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:20.543395 kubelet[2397]: E0213 19:29:20.542154 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:21.243192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550738771.mount: Deactivated successfully. Feb 13 19:29:21.542788 kubelet[2397]: E0213 19:29:21.542470 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:21.768845 ntpd[1881]: Listen normally on 6 vxlan.calico 192.168.66.128:123 Feb 13 19:29:21.768977 ntpd[1881]: Listen normally on 7 califa5e955ace8 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:29:21.769603 ntpd[1881]: 13 Feb 19:29:21 ntpd[1881]: Listen normally on 6 vxlan.calico 192.168.66.128:123 Feb 13 19:29:21.769603 ntpd[1881]: 13 Feb 19:29:21 ntpd[1881]: Listen normally on 7 califa5e955ace8 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:29:21.769603 ntpd[1881]: 13 Feb 19:29:21 ntpd[1881]: Listen normally on 8 caliee53b9ef67d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:29:21.769603 ntpd[1881]: 13 Feb 19:29:21 ntpd[1881]: Listen normally on 9 vxlan.calico [fe80::64ca:35ff:fe87:76a0%5]:123 Feb 13 19:29:21.769188 ntpd[1881]: Listen normally on 8 caliee53b9ef67d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:29:21.769247 ntpd[1881]: Listen normally on 
9 vxlan.calico [fe80::64ca:35ff:fe87:76a0%5]:123 Feb 13 19:29:22.543697 kubelet[2397]: E0213 19:29:22.543660 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:23.232568 containerd[1901]: time="2025-02-13T19:29:23.232400946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:23.244931 containerd[1901]: time="2025-02-13T19:29:23.243740755Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:29:23.249703 containerd[1901]: time="2025-02-13T19:29:23.249612052Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:23.261123 containerd[1901]: time="2025-02-13T19:29:23.261051409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:23.263133 containerd[1901]: time="2025-02-13T19:29:23.262671630Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.58494012s" Feb 13 19:29:23.263133 containerd[1901]: time="2025-02-13T19:29:23.262720028Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:29:23.267394 containerd[1901]: time="2025-02-13T19:29:23.267344076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:29:23.280734 
containerd[1901]: time="2025-02-13T19:29:23.280567744Z" level=info msg="CreateContainer within sandbox \"66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:29:23.305075 containerd[1901]: time="2025-02-13T19:29:23.305030802Z" level=info msg="CreateContainer within sandbox \"66f7685ae196f3ce4ea8ee081600ebdfa0eadc3f767ba0e217a2a004031bdd1c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"747ea526e6864054537aafcfe49bcfefe6fadaeb47e6aa2b95bc3ebec01bbf4c\"" Feb 13 19:29:23.306978 containerd[1901]: time="2025-02-13T19:29:23.305994843Z" level=info msg="StartContainer for \"747ea526e6864054537aafcfe49bcfefe6fadaeb47e6aa2b95bc3ebec01bbf4c\"" Feb 13 19:29:23.359766 systemd[1]: Started cri-containerd-747ea526e6864054537aafcfe49bcfefe6fadaeb47e6aa2b95bc3ebec01bbf4c.scope - libcontainer container 747ea526e6864054537aafcfe49bcfefe6fadaeb47e6aa2b95bc3ebec01bbf4c. Feb 13 19:29:23.396283 containerd[1901]: time="2025-02-13T19:29:23.396237383Z" level=info msg="StartContainer for \"747ea526e6864054537aafcfe49bcfefe6fadaeb47e6aa2b95bc3ebec01bbf4c\" returns successfully" Feb 13 19:29:23.545261 kubelet[2397]: E0213 19:29:23.545104 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:24.237003 kubelet[2397]: I0213 19:29:24.236753 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-hjtrf" podStartSLOduration=9.113063585 podStartE2EDuration="16.236738036s" podCreationTimestamp="2025-02-13 19:29:08 +0000 UTC" firstStartedPulling="2025-02-13 19:29:16.140903776 +0000 UTC m=+27.638295720" lastFinishedPulling="2025-02-13 19:29:23.264578217 +0000 UTC m=+34.761970171" observedRunningTime="2025-02-13 19:29:24.236609872 +0000 UTC m=+35.734001833" watchObservedRunningTime="2025-02-13 19:29:24.236738036 +0000 UTC m=+35.734129995" Feb 13 19:29:24.545413 
kubelet[2397]: E0213 19:29:24.545249 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:25.185167 containerd[1901]: time="2025-02-13T19:29:25.179411692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:25.194518 containerd[1901]: time="2025-02-13T19:29:25.190860173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:29:25.195050 containerd[1901]: time="2025-02-13T19:29:25.195006013Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:25.198525 containerd[1901]: time="2025-02-13T19:29:25.198480149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:25.199541 containerd[1901]: time="2025-02-13T19:29:25.199503658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.931827328s" Feb 13 19:29:25.199740 containerd[1901]: time="2025-02-13T19:29:25.199717194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:29:25.203544 containerd[1901]: time="2025-02-13T19:29:25.203506312Z" level=info 
msg="CreateContainer within sandbox \"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:29:25.227654 containerd[1901]: time="2025-02-13T19:29:25.227610812Z" level=info msg="CreateContainer within sandbox \"b524c42c47dc018c3a6338abb143b7e7369153b04099fdbe245a73d2ea59dc0c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"225b53ebd7f7a09cf0be536916bb3dbc5da0d713480139a2306d2792fb57813f\"" Feb 13 19:29:25.230294 containerd[1901]: time="2025-02-13T19:29:25.228220322Z" level=info msg="StartContainer for \"225b53ebd7f7a09cf0be536916bb3dbc5da0d713480139a2306d2792fb57813f\"" Feb 13 19:29:25.290811 systemd[1]: Started cri-containerd-225b53ebd7f7a09cf0be536916bb3dbc5da0d713480139a2306d2792fb57813f.scope - libcontainer container 225b53ebd7f7a09cf0be536916bb3dbc5da0d713480139a2306d2792fb57813f. Feb 13 19:29:25.347466 containerd[1901]: time="2025-02-13T19:29:25.347308127Z" level=info msg="StartContainer for \"225b53ebd7f7a09cf0be536916bb3dbc5da0d713480139a2306d2792fb57813f\" returns successfully" Feb 13 19:29:25.546163 kubelet[2397]: E0213 19:29:25.546113 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:25.716185 kubelet[2397]: I0213 19:29:25.715566 2397 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:29:25.716185 kubelet[2397]: I0213 19:29:25.715602 2397 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:29:26.269776 kubelet[2397]: I0213 19:29:26.269269 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2slcf" podStartSLOduration=28.009100234 
podStartE2EDuration="37.269082668s" podCreationTimestamp="2025-02-13 19:28:49 +0000 UTC" firstStartedPulling="2025-02-13 19:29:15.942057536 +0000 UTC m=+27.439449489" lastFinishedPulling="2025-02-13 19:29:25.202039981 +0000 UTC m=+36.699431923" observedRunningTime="2025-02-13 19:29:26.268470999 +0000 UTC m=+37.765862960" watchObservedRunningTime="2025-02-13 19:29:26.269082668 +0000 UTC m=+37.766474626" Feb 13 19:29:26.546707 kubelet[2397]: E0213 19:29:26.546569 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:27.547190 kubelet[2397]: E0213 19:29:27.547132 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:28.547501 kubelet[2397]: E0213 19:29:28.547438 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:29.504229 kubelet[2397]: E0213 19:29:29.504056 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:29.548572 kubelet[2397]: E0213 19:29:29.548519 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:30.549010 kubelet[2397]: E0213 19:29:30.548938 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:30.709986 systemd[1]: Created slice kubepods-besteffort-pod4ba7d4a2_8de1_4cd9_b12a_fe782e084aab.slice - libcontainer container kubepods-besteffort-pod4ba7d4a2_8de1_4cd9_b12a_fe782e084aab.slice. 
Feb 13 19:29:30.755322 kubelet[2397]: I0213 19:29:30.755267 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57xv\" (UniqueName: \"kubernetes.io/projected/4ba7d4a2-8de1-4cd9-b12a-fe782e084aab-kube-api-access-s57xv\") pod \"nfs-server-provisioner-0\" (UID: \"4ba7d4a2-8de1-4cd9-b12a-fe782e084aab\") " pod="default/nfs-server-provisioner-0" Feb 13 19:29:30.755322 kubelet[2397]: I0213 19:29:30.755323 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4ba7d4a2-8de1-4cd9-b12a-fe782e084aab-data\") pod \"nfs-server-provisioner-0\" (UID: \"4ba7d4a2-8de1-4cd9-b12a-fe782e084aab\") " pod="default/nfs-server-provisioner-0" Feb 13 19:29:31.029245 containerd[1901]: time="2025-02-13T19:29:31.029205617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4ba7d4a2-8de1-4cd9-b12a-fe782e084aab,Namespace:default,Attempt:0,}" Feb 13 19:29:31.207301 systemd-networkd[1745]: cali60e51b789ff: Link UP Feb 13 19:29:31.208564 systemd-networkd[1745]: cali60e51b789ff: Gained carrier Feb 13 19:29:31.213213 (udev-worker)[4119]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.106 [INFO][4124] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 4ba7d4a2-8de1-4cd9-b12a-fe782e084aab 1257 0 2025-02-13 19:29:30 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.17.153 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.106 [INFO][4124] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.144 [INFO][4134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" 
HandleID="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.155 [INFO][4134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" HandleID="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d9d40), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.153", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:29:31.144093477 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.155 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.155 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.155 [INFO][4134] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.158 [INFO][4134] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.165 [INFO][4134] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.170 [INFO][4134] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.174 [INFO][4134] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.183 [INFO][4134] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.183 [INFO][4134] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.185 [INFO][4134] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2 Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.191 [INFO][4134] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.201 [INFO][4134] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.131/26] block=192.168.66.128/26 
handle="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.201 [INFO][4134] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.131/26] handle="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" host="172.31.17.153" Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.201 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:29:31.227268 containerd[1901]: 2025-02-13 19:29:31.201 [INFO][4134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.131/26] IPv6=[] ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" HandleID="k8s-pod-network.1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.229173 containerd[1901]: 2025-02-13 19:29:31.203 [INFO][4124] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4ba7d4a2-8de1-4cd9-b12a-fe782e084aab", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:31.229173 containerd[1901]: 2025-02-13 19:29:31.204 [INFO][4124] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.131/32] ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.229173 containerd[1901]: 2025-02-13 19:29:31.204 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.229173 containerd[1901]: 2025-02-13 19:29:31.208 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.229827 containerd[1901]: 2025-02-13 19:29:31.209 [INFO][4124] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4ba7d4a2-8de1-4cd9-b12a-fe782e084aab", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"12:55:45:0c:0e:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:31.229827 containerd[1901]: 2025-02-13 19:29:31.224 [INFO][4124] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:29:31.289129 containerd[1901]: time="2025-02-13T19:29:31.288852855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:31.289129 containerd[1901]: time="2025-02-13T19:29:31.288937641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:31.289129 containerd[1901]: time="2025-02-13T19:29:31.288959891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:31.289129 containerd[1901]: time="2025-02-13T19:29:31.289082121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:31.339735 systemd[1]: Started cri-containerd-1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2.scope - libcontainer container 1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2. Feb 13 19:29:31.412140 containerd[1901]: time="2025-02-13T19:29:31.412092835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4ba7d4a2-8de1-4cd9-b12a-fe782e084aab,Namespace:default,Attempt:0,} returns sandbox id \"1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2\"" Feb 13 19:29:31.425742 containerd[1901]: time="2025-02-13T19:29:31.425590912Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:29:31.549812 kubelet[2397]: E0213 19:29:31.549421 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:31.870681 systemd[1]: run-containerd-runc-k8s.io-1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2-runc.RCTPJ3.mount: Deactivated successfully. 
Feb 13 19:29:32.553282 kubelet[2397]: E0213 19:29:32.553208 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:33.203686 systemd-networkd[1745]: cali60e51b789ff: Gained IPv6LL Feb 13 19:29:33.555012 kubelet[2397]: E0213 19:29:33.554878 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:34.555846 kubelet[2397]: E0213 19:29:34.555416 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:34.647227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816803593.mount: Deactivated successfully. Feb 13 19:29:35.558656 kubelet[2397]: E0213 19:29:35.558604 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:35.768628 ntpd[1881]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:29:35.769266 ntpd[1881]: 13 Feb 19:29:35 ntpd[1881]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:29:36.559118 kubelet[2397]: E0213 19:29:36.559077 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:37.560546 kubelet[2397]: E0213 19:29:37.560049 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:37.779088 containerd[1901]: time="2025-02-13T19:29:37.779031454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:37.780627 containerd[1901]: time="2025-02-13T19:29:37.780572425Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:29:37.781800 
containerd[1901]: time="2025-02-13T19:29:37.781454060Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:37.786388 containerd[1901]: time="2025-02-13T19:29:37.786322309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:37.789347 containerd[1901]: time="2025-02-13T19:29:37.789295065Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.363547749s" Feb 13 19:29:37.789762 containerd[1901]: time="2025-02-13T19:29:37.789616913Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:29:37.793172 containerd[1901]: time="2025-02-13T19:29:37.793132948Z" level=info msg="CreateContainer within sandbox \"1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:29:37.852871 containerd[1901]: time="2025-02-13T19:29:37.852763021Z" level=info msg="CreateContainer within sandbox \"1a2b2ceeab1a1be357322457d462152126acd20768c0dab48c3b71df5e71abc2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d21939522862cb181ddfbd00948308d539fd1d4f6892d5f4eb894058f3f139bd\"" Feb 13 19:29:37.852966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821223963.mount: 
Deactivated successfully. Feb 13 19:29:37.855680 containerd[1901]: time="2025-02-13T19:29:37.855647184Z" level=info msg="StartContainer for \"d21939522862cb181ddfbd00948308d539fd1d4f6892d5f4eb894058f3f139bd\"" Feb 13 19:29:37.914659 systemd[1]: Started cri-containerd-d21939522862cb181ddfbd00948308d539fd1d4f6892d5f4eb894058f3f139bd.scope - libcontainer container d21939522862cb181ddfbd00948308d539fd1d4f6892d5f4eb894058f3f139bd. Feb 13 19:29:37.961462 containerd[1901]: time="2025-02-13T19:29:37.961253492Z" level=info msg="StartContainer for \"d21939522862cb181ddfbd00948308d539fd1d4f6892d5f4eb894058f3f139bd\" returns successfully" Feb 13 19:29:38.561115 kubelet[2397]: E0213 19:29:38.561048 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:39.562004 kubelet[2397]: E0213 19:29:39.561886 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:40.562868 kubelet[2397]: E0213 19:29:40.562809 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:41.563005 kubelet[2397]: E0213 19:29:41.562943 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:42.563702 kubelet[2397]: E0213 19:29:42.563645 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:43.564104 kubelet[2397]: E0213 19:29:43.564015 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:44.564754 kubelet[2397]: E0213 19:29:44.564701 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:45.565162 kubelet[2397]: E0213 19:29:45.565104 2397 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:46.566330 kubelet[2397]: E0213 19:29:46.566267 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:47.567023 kubelet[2397]: E0213 19:29:47.566971 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:48.567952 kubelet[2397]: E0213 19:29:48.567904 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:49.504793 kubelet[2397]: E0213 19:29:49.504449 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:49.568988 kubelet[2397]: E0213 19:29:49.568923 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:29:49.589677 containerd[1901]: time="2025-02-13T19:29:49.589641389Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:49.590344 containerd[1901]: time="2025-02-13T19:29:49.589811587Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:49.590344 containerd[1901]: time="2025-02-13T19:29:49.589830247Z" level=info msg="StopPodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:49.618380 containerd[1901]: time="2025-02-13T19:29:49.616999407Z" level=info msg="RemovePodSandbox for \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:49.658950 containerd[1901]: time="2025-02-13T19:29:49.658890652Z" level=info msg="Forcibly stopping sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\"" Feb 13 19:29:49.667087 
containerd[1901]: time="2025-02-13T19:29:49.659048234Z" level=info msg="TearDown network for sandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" successfully" Feb 13 19:29:49.685379 containerd[1901]: time="2025-02-13T19:29:49.685305621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:49.685553 containerd[1901]: time="2025-02-13T19:29:49.685420802Z" level=info msg="RemovePodSandbox \"1226681799cce8ab80de57e0bac64fa8144a7038f692d187fbc3c6a33e6b0b7a\" returns successfully" Feb 13 19:29:49.686384 containerd[1901]: time="2025-02-13T19:29:49.686331886Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:49.686506 containerd[1901]: time="2025-02-13T19:29:49.686485059Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:49.686560 containerd[1901]: time="2025-02-13T19:29:49.686502285Z" level=info msg="StopPodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:49.687048 containerd[1901]: time="2025-02-13T19:29:49.686998948Z" level=info msg="RemovePodSandbox for \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:49.687048 containerd[1901]: time="2025-02-13T19:29:49.687055915Z" level=info msg="Forcibly stopping sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\"" Feb 13 19:29:49.687215 containerd[1901]: time="2025-02-13T19:29:49.687140977Z" level=info msg="TearDown network for sandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" successfully" Feb 13 19:29:49.691197 containerd[1901]: time="2025-02-13T19:29:49.691148120Z" level=warning 
msg="Failed to get podSandbox status for container event for sandboxID \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:49.691633 containerd[1901]: time="2025-02-13T19:29:49.691206165Z" level=info msg="RemovePodSandbox \"63c75c37227579ed81f499e5b4f1d45e6d4e5d2122138039ec4326681c947823\" returns successfully" Feb 13 19:29:49.692180 containerd[1901]: time="2025-02-13T19:29:49.692151156Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:49.692487 containerd[1901]: time="2025-02-13T19:29:49.692454269Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 13 19:29:49.692487 containerd[1901]: time="2025-02-13T19:29:49.692480425Z" level=info msg="StopPodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:49.693542 containerd[1901]: time="2025-02-13T19:29:49.693497159Z" level=info msg="RemovePodSandbox for \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:49.693542 containerd[1901]: time="2025-02-13T19:29:49.693530683Z" level=info msg="Forcibly stopping sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\"" Feb 13 19:29:49.693690 containerd[1901]: time="2025-02-13T19:29:49.693624500Z" level=info msg="TearDown network for sandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" successfully" Feb 13 19:29:49.696628 containerd[1901]: time="2025-02-13T19:29:49.696538540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.696798 containerd[1901]: time="2025-02-13T19:29:49.696644354Z" level=info msg="RemovePodSandbox \"c74977e7e2c7dc753524c0e4b42deededb40d1eea7715fe6f033759eda0cf5a7\" returns successfully" Feb 13 19:29:49.700070 containerd[1901]: time="2025-02-13T19:29:49.700037744Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:49.700198 containerd[1901]: time="2025-02-13T19:29:49.700162228Z" level=info msg="TearDown network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" successfully" Feb 13 19:29:49.700198 containerd[1901]: time="2025-02-13T19:29:49.700178848Z" level=info msg="StopPodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" returns successfully" Feb 13 19:29:49.701040 containerd[1901]: time="2025-02-13T19:29:49.700980721Z" level=info msg="RemovePodSandbox for \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:49.701040 containerd[1901]: time="2025-02-13T19:29:49.701015025Z" level=info msg="Forcibly stopping sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\"" Feb 13 19:29:49.701230 containerd[1901]: time="2025-02-13T19:29:49.701103628Z" level=info msg="TearDown network for sandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" successfully" Feb 13 19:29:49.705884 containerd[1901]: time="2025-02-13T19:29:49.705389125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.705884 containerd[1901]: time="2025-02-13T19:29:49.705446042Z" level=info msg="RemovePodSandbox \"277f44c3c8153e250f840361c3ef9f9782f79bb248942e4af06a60a1bb833aba\" returns successfully" Feb 13 19:29:49.706350 containerd[1901]: time="2025-02-13T19:29:49.706307690Z" level=info msg="StopPodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" Feb 13 19:29:49.707153 containerd[1901]: time="2025-02-13T19:29:49.706613656Z" level=info msg="TearDown network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" successfully" Feb 13 19:29:49.707153 containerd[1901]: time="2025-02-13T19:29:49.706636557Z" level=info msg="StopPodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" returns successfully" Feb 13 19:29:49.707834 containerd[1901]: time="2025-02-13T19:29:49.707413558Z" level=info msg="RemovePodSandbox for \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" Feb 13 19:29:49.707834 containerd[1901]: time="2025-02-13T19:29:49.707445596Z" level=info msg="Forcibly stopping sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\"" Feb 13 19:29:49.707834 containerd[1901]: time="2025-02-13T19:29:49.707774618Z" level=info msg="TearDown network for sandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" successfully" Feb 13 19:29:49.751790 containerd[1901]: time="2025-02-13T19:29:49.751695517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.752794 containerd[1901]: time="2025-02-13T19:29:49.752638442Z" level=info msg="RemovePodSandbox \"592066a662e72cbd71bb3b1a132a4bf62d419c5a08fdc195a45f9abeca4bf82d\" returns successfully" Feb 13 19:29:49.759486 containerd[1901]: time="2025-02-13T19:29:49.757982022Z" level=info msg="StopPodSandbox for \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\"" Feb 13 19:29:49.759486 containerd[1901]: time="2025-02-13T19:29:49.758127369Z" level=info msg="TearDown network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" successfully" Feb 13 19:29:49.759486 containerd[1901]: time="2025-02-13T19:29:49.758156656Z" level=info msg="StopPodSandbox for \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" returns successfully" Feb 13 19:29:49.764047 containerd[1901]: time="2025-02-13T19:29:49.763819316Z" level=info msg="RemovePodSandbox for \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\"" Feb 13 19:29:49.764047 containerd[1901]: time="2025-02-13T19:29:49.763880943Z" level=info msg="Forcibly stopping sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\"" Feb 13 19:29:49.764047 containerd[1901]: time="2025-02-13T19:29:49.764003289Z" level=info msg="TearDown network for sandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" successfully" Feb 13 19:29:49.783773 containerd[1901]: time="2025-02-13T19:29:49.783697308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.783934 containerd[1901]: time="2025-02-13T19:29:49.783778257Z" level=info msg="RemovePodSandbox \"106302dee472be6d4e5a6e58cf074c279fb21a13b6dbfa0e013674e362b0b1a5\" returns successfully" Feb 13 19:29:49.784320 containerd[1901]: time="2025-02-13T19:29:49.784286463Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:49.784483 containerd[1901]: time="2025-02-13T19:29:49.784418487Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:49.784552 containerd[1901]: time="2025-02-13T19:29:49.784480373Z" level=info msg="StopPodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:49.784979 containerd[1901]: time="2025-02-13T19:29:49.784950640Z" level=info msg="RemovePodSandbox for \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:49.785150 containerd[1901]: time="2025-02-13T19:29:49.784980288Z" level=info msg="Forcibly stopping sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\"" Feb 13 19:29:49.785201 containerd[1901]: time="2025-02-13T19:29:49.785152295Z" level=info msg="TearDown network for sandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" successfully" Feb 13 19:29:49.788152 containerd[1901]: time="2025-02-13T19:29:49.788112684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.788268 containerd[1901]: time="2025-02-13T19:29:49.788169143Z" level=info msg="RemovePodSandbox \"b0cda9e750e928950158add96584c6ba60c8483d8f3b7b648c73c15ff98a674b\" returns successfully" Feb 13 19:29:49.788935 containerd[1901]: time="2025-02-13T19:29:49.788659218Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:49.788935 containerd[1901]: time="2025-02-13T19:29:49.788833343Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:49.788935 containerd[1901]: time="2025-02-13T19:29:49.788846742Z" level=info msg="StopPodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:49.789329 containerd[1901]: time="2025-02-13T19:29:49.789305685Z" level=info msg="RemovePodSandbox for \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:49.790175 containerd[1901]: time="2025-02-13T19:29:49.789331811Z" level=info msg="Forcibly stopping sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\"" Feb 13 19:29:49.790263 containerd[1901]: time="2025-02-13T19:29:49.790172060Z" level=info msg="TearDown network for sandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" successfully" Feb 13 19:29:49.793962 containerd[1901]: time="2025-02-13T19:29:49.793927658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.794077 containerd[1901]: time="2025-02-13T19:29:49.793999746Z" level=info msg="RemovePodSandbox \"1590bed5c2dbc2c13c3c7d2c05548e86f302b093dbc5b5effdcbe580bf629a8d\" returns successfully" Feb 13 19:29:49.794577 containerd[1901]: time="2025-02-13T19:29:49.794544990Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:49.794682 containerd[1901]: time="2025-02-13T19:29:49.794656388Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:49.794682 containerd[1901]: time="2025-02-13T19:29:49.794672591Z" level=info msg="StopPodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:49.795110 containerd[1901]: time="2025-02-13T19:29:49.795083818Z" level=info msg="RemovePodSandbox for \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:49.795194 containerd[1901]: time="2025-02-13T19:29:49.795114132Z" level=info msg="Forcibly stopping sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\"" Feb 13 19:29:49.795239 containerd[1901]: time="2025-02-13T19:29:49.795193111Z" level=info msg="TearDown network for sandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" successfully" Feb 13 19:29:49.800597 containerd[1901]: time="2025-02-13T19:29:49.800549078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.800734 containerd[1901]: time="2025-02-13T19:29:49.800616154Z" level=info msg="RemovePodSandbox \"c4f3e464a7bc28c002379e1cd6cee5562a7c75dccd998c22ba9257f1bb639383\" returns successfully" Feb 13 19:29:49.801145 containerd[1901]: time="2025-02-13T19:29:49.801116168Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:49.801254 containerd[1901]: time="2025-02-13T19:29:49.801230721Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:49.801299 containerd[1901]: time="2025-02-13T19:29:49.801250308Z" level=info msg="StopPodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:49.801689 containerd[1901]: time="2025-02-13T19:29:49.801649871Z" level=info msg="RemovePodSandbox for \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:49.801870 containerd[1901]: time="2025-02-13T19:29:49.801694754Z" level=info msg="Forcibly stopping sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\"" Feb 13 19:29:49.801932 containerd[1901]: time="2025-02-13T19:29:49.801849543Z" level=info msg="TearDown network for sandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" successfully" Feb 13 19:29:49.813035 containerd[1901]: time="2025-02-13T19:29:49.812926437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.813238 containerd[1901]: time="2025-02-13T19:29:49.813066565Z" level=info msg="RemovePodSandbox \"81057b0d25700a64c019094576bb7601777eff8f9d73830f992e8ded73b3bac8\" returns successfully" Feb 13 19:29:49.814180 containerd[1901]: time="2025-02-13T19:29:49.814143815Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:49.814289 containerd[1901]: time="2025-02-13T19:29:49.814271893Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:49.814647 containerd[1901]: time="2025-02-13T19:29:49.814287986Z" level=info msg="StopPodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:49.818009 containerd[1901]: time="2025-02-13T19:29:49.817957865Z" level=info msg="RemovePodSandbox for \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:49.818118 containerd[1901]: time="2025-02-13T19:29:49.818011336Z" level=info msg="Forcibly stopping sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\"" Feb 13 19:29:49.839051 containerd[1901]: time="2025-02-13T19:29:49.838974754Z" level=info msg="TearDown network for sandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" successfully" Feb 13 19:29:49.843723 containerd[1901]: time="2025-02-13T19:29:49.843645460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.843723 containerd[1901]: time="2025-02-13T19:29:49.843702330Z" level=info msg="RemovePodSandbox \"62f6e833830825836b2d9cd4ef2468d7f8eb1447b5819f3817664e63f552ef09\" returns successfully" Feb 13 19:29:49.844043 containerd[1901]: time="2025-02-13T19:29:49.844012610Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:49.844139 containerd[1901]: time="2025-02-13T19:29:49.844119946Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:49.844281 containerd[1901]: time="2025-02-13T19:29:49.844135636Z" level=info msg="StopPodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully" Feb 13 19:29:49.846119 containerd[1901]: time="2025-02-13T19:29:49.845110142Z" level=info msg="RemovePodSandbox for \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:49.846119 containerd[1901]: time="2025-02-13T19:29:49.845218417Z" level=info msg="Forcibly stopping sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\"" Feb 13 19:29:49.846119 containerd[1901]: time="2025-02-13T19:29:49.845308284Z" level=info msg="TearDown network for sandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" successfully" Feb 13 19:29:49.855956 containerd[1901]: time="2025-02-13T19:29:49.855888686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:49.858106 containerd[1901]: time="2025-02-13T19:29:49.855967367Z" level=info msg="RemovePodSandbox \"916207604eb217076ba61a7e571db68500d1003652ba46ac001d4d31d033c2b3\" returns successfully"
Feb 13 19:29:49.858106 containerd[1901]: time="2025-02-13T19:29:49.857489116Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\""
Feb 13 19:29:49.858106 containerd[1901]: time="2025-02-13T19:29:49.857673442Z" level=info msg="TearDown network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" successfully"
Feb 13 19:29:49.858106 containerd[1901]: time="2025-02-13T19:29:49.857693289Z" level=info msg="StopPodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" returns successfully"
Feb 13 19:29:49.859630 containerd[1901]: time="2025-02-13T19:29:49.858166438Z" level=info msg="RemovePodSandbox for \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\""
Feb 13 19:29:49.861045 containerd[1901]: time="2025-02-13T19:29:49.859771227Z" level=info msg="Forcibly stopping sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\""
Feb 13 19:29:49.861045 containerd[1901]: time="2025-02-13T19:29:49.859881799Z" level=info msg="TearDown network for sandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" successfully"
Feb 13 19:29:49.863079 containerd[1901]: time="2025-02-13T19:29:49.863046797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:29:49.863339 containerd[1901]: time="2025-02-13T19:29:49.863308518Z" level=info msg="RemovePodSandbox \"66546204545d44abda6676e4f2078072d294f7db45e65e8e7e6f5492cf326bbf\" returns successfully"
Feb 13 19:29:49.864146 containerd[1901]: time="2025-02-13T19:29:49.864078977Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\""
Feb 13 19:29:49.864430 containerd[1901]: time="2025-02-13T19:29:49.864383506Z" level=info msg="TearDown network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" successfully"
Feb 13 19:29:49.864577 containerd[1901]: time="2025-02-13T19:29:49.864554514Z" level=info msg="StopPodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" returns successfully"
Feb 13 19:29:49.865075 containerd[1901]: time="2025-02-13T19:29:49.865049642Z" level=info msg="RemovePodSandbox for \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\""
Feb 13 19:29:49.865201 containerd[1901]: time="2025-02-13T19:29:49.865184111Z" level=info msg="Forcibly stopping sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\""
Feb 13 19:29:49.865430 containerd[1901]: time="2025-02-13T19:29:49.865378028Z" level=info msg="TearDown network for sandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" successfully"
Feb 13 19:29:49.871569 containerd[1901]: time="2025-02-13T19:29:49.871523880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:29:49.871803 containerd[1901]: time="2025-02-13T19:29:49.871766248Z" level=info msg="RemovePodSandbox \"4a7647c5c3655b2ebbc1a4299824f9e2d37981503ecce81e0d91e7875e6de31f\" returns successfully"
Feb 13 19:29:49.872323 containerd[1901]: time="2025-02-13T19:29:49.872297453Z" level=info msg="StopPodSandbox for \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\""
Feb 13 19:29:49.872457 containerd[1901]: time="2025-02-13T19:29:49.872435963Z" level=info msg="TearDown network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" successfully"
Feb 13 19:29:49.872506 containerd[1901]: time="2025-02-13T19:29:49.872453967Z" level=info msg="StopPodSandbox for \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" returns successfully"
Feb 13 19:29:49.873524 containerd[1901]: time="2025-02-13T19:29:49.872896964Z" level=info msg="RemovePodSandbox for \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\""
Feb 13 19:29:49.873524 containerd[1901]: time="2025-02-13T19:29:49.872982610Z" level=info msg="Forcibly stopping sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\""
Feb 13 19:29:49.873524 containerd[1901]: time="2025-02-13T19:29:49.873050802Z" level=info msg="TearDown network for sandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" successfully"
Feb 13 19:29:49.876268 containerd[1901]: time="2025-02-13T19:29:49.876225691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:29:49.876483 containerd[1901]: time="2025-02-13T19:29:49.876280606Z" level=info msg="RemovePodSandbox \"0713a05d4be7a02aa6872658d0d44501bc4f0bd35cf3611896cbf4b62a75667e\" returns successfully"
Feb 13 19:29:50.573666 kubelet[2397]: E0213 19:29:50.573614 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:51.573862 kubelet[2397]: E0213 19:29:51.573817 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:52.574911 kubelet[2397]: E0213 19:29:52.574839 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:53.575330 kubelet[2397]: E0213 19:29:53.575276 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:54.575912 kubelet[2397]: E0213 19:29:54.575848 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:55.576196 kubelet[2397]: E0213 19:29:55.576072 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:56.576635 kubelet[2397]: E0213 19:29:56.576576 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:57.577509 kubelet[2397]: E0213 19:29:57.577382 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:58.577881 kubelet[2397]: E0213 19:29:58.577821 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:29:59.578837 kubelet[2397]: E0213 19:29:59.578787 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:00.579084 kubelet[2397]: E0213 19:30:00.579027 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:01.586958 kubelet[2397]: E0213 19:30:01.586891 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:02.587543 kubelet[2397]: E0213 19:30:02.587481 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:03.005489 kubelet[2397]: I0213 19:30:03.005428 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.635670645 podStartE2EDuration="33.005303751s" podCreationTimestamp="2025-02-13 19:29:30 +0000 UTC" firstStartedPulling="2025-02-13 19:29:31.421616589 +0000 UTC m=+42.919008542" lastFinishedPulling="2025-02-13 19:29:37.791249701 +0000 UTC m=+49.288641648" observedRunningTime="2025-02-13 19:29:38.458783264 +0000 UTC m=+49.956175234" watchObservedRunningTime="2025-02-13 19:30:03.005303751 +0000 UTC m=+74.502695712"
Feb 13 19:30:03.081735 systemd[1]: Created slice kubepods-besteffort-pod2ebd3e55_73e7_4ada_96aa_1a66d57d7f36.slice - libcontainer container kubepods-besteffort-pod2ebd3e55_73e7_4ada_96aa_1a66d57d7f36.slice.
Feb 13 19:30:03.146017 kubelet[2397]: I0213 19:30:03.143060 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-88dcb4f8-8e29-4830-bc64-da692d4afd39\" (UniqueName: \"kubernetes.io/nfs/2ebd3e55-73e7-4ada-96aa-1a66d57d7f36-pvc-88dcb4f8-8e29-4830-bc64-da692d4afd39\") pod \"test-pod-1\" (UID: \"2ebd3e55-73e7-4ada-96aa-1a66d57d7f36\") " pod="default/test-pod-1"
Feb 13 19:30:03.152187 kubelet[2397]: I0213 19:30:03.152127 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7x7\" (UniqueName: \"kubernetes.io/projected/2ebd3e55-73e7-4ada-96aa-1a66d57d7f36-kube-api-access-jw7x7\") pod \"test-pod-1\" (UID: \"2ebd3e55-73e7-4ada-96aa-1a66d57d7f36\") " pod="default/test-pod-1"
Feb 13 19:30:03.375443 kernel: FS-Cache: Loaded
Feb 13 19:30:03.555212 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:30:03.555373 kernel: RPC: Registered udp transport module.
Feb 13 19:30:03.555410 kernel: RPC: Registered tcp transport module.
Feb 13 19:30:03.555813 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:30:03.556589 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:30:03.588238 kubelet[2397]: E0213 19:30:03.588055 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:04.138717 kernel: NFS: Registering the id_resolver key type
Feb 13 19:30:04.138839 kernel: Key type id_resolver registered
Feb 13 19:30:04.138862 kernel: Key type id_legacy registered
Feb 13 19:30:04.250126 nfsidmap[4360]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:30:04.255212 nfsidmap[4361]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:30:04.354317 containerd[1901]: time="2025-02-13T19:30:04.353983527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2ebd3e55-73e7-4ada-96aa-1a66d57d7f36,Namespace:default,Attempt:0,}"
Feb 13 19:30:04.589061 kubelet[2397]: E0213 19:30:04.589020 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:04.660478 (udev-worker)[4347]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:30:04.663585 systemd-networkd[1745]: cali5ec59c6bf6e: Link UP
Feb 13 19:30:04.666653 systemd-networkd[1745]: cali5ec59c6bf6e: Gained carrier
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.492 [INFO][4362] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-test--pod--1-eth0 default 2ebd3e55-73e7-4ada-96aa-1a66d57d7f36 1360 0 2025-02-13 19:29:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.153 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.492 [INFO][4362] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.580 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" HandleID="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Workload="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.597 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" HandleID="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Workload="172.31.17.153-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eceb0), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.153", "pod":"test-pod-1", "timestamp":"2025-02-13 19:30:04.580333496 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.597 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.597 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.597 [INFO][4373] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153'
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.601 [INFO][4373] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.607 [INFO][4373] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.618 [INFO][4373] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.622 [INFO][4373] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.626 [INFO][4373] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.626 [INFO][4373] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.630 [INFO][4373] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.638 [INFO][4373] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.651 [INFO][4373] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.132/26] block=192.168.66.128/26 handle="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.652 [INFO][4373] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.132/26] handle="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" host="172.31.17.153"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.652 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.652 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.132/26] IPv6=[] ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" HandleID="k8s-pod-network.7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Workload="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.705763 containerd[1901]: 2025-02-13 19:30:04.658 [INFO][4362] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2ebd3e55-73e7-4ada-96aa-1a66d57d7f36", ResourceVersion:"1360", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:30:04.709604 containerd[1901]: 2025-02-13 19:30:04.658 [INFO][4362] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.132/32] ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.709604 containerd[1901]: 2025-02-13 19:30:04.658 [INFO][4362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.709604 containerd[1901]: 2025-02-13 19:30:04.671 [INFO][4362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.709604 containerd[1901]: 2025-02-13 19:30:04.672 [INFO][4362] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2ebd3e55-73e7-4ada-96aa-1a66d57d7f36", ResourceVersion:"1360", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 29, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"42:ad:01:08:d4:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:30:04.709604 containerd[1901]: 2025-02-13 19:30:04.684 [INFO][4362] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0"
Feb 13 19:30:04.818450 containerd[1901]: time="2025-02-13T19:30:04.817231568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:30:04.818621 containerd[1901]: time="2025-02-13T19:30:04.818441239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:30:04.818621 containerd[1901]: time="2025-02-13T19:30:04.818469461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:04.828807 containerd[1901]: time="2025-02-13T19:30:04.828545020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:04.866622 systemd[1]: Started cri-containerd-7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff.scope - libcontainer container 7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff.
Feb 13 19:30:04.979005 containerd[1901]: time="2025-02-13T19:30:04.978933370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2ebd3e55-73e7-4ada-96aa-1a66d57d7f36,Namespace:default,Attempt:0,} returns sandbox id \"7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff\""
Feb 13 19:30:04.987456 containerd[1901]: time="2025-02-13T19:30:04.986877792Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:30:05.569807 containerd[1901]: time="2025-02-13T19:30:05.569556404Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:30:05.587316 containerd[1901]: time="2025-02-13T19:30:05.587262213Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 599.791798ms"
Feb 13 19:30:05.587618 containerd[1901]: time="2025-02-13T19:30:05.587401649Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:30:05.590224 kubelet[2397]: E0213 19:30:05.590154 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:05.595814 containerd[1901]: time="2025-02-13T19:30:05.595552161Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:05.605068 containerd[1901]: time="2025-02-13T19:30:05.603503045Z" level=info msg="CreateContainer within sandbox \"7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:30:05.639769 containerd[1901]: time="2025-02-13T19:30:05.639719702Z" level=info msg="CreateContainer within sandbox \"7fef698a53c38993149031e67da84805bcaa41da140292f424e46c51c91d58ff\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ebe9523a90d45d88f807921e12fc601040fa17072619b9fbf6b3fdb4798944c3\""
Feb 13 19:30:05.641653 containerd[1901]: time="2025-02-13T19:30:05.641608742Z" level=info msg="StartContainer for \"ebe9523a90d45d88f807921e12fc601040fa17072619b9fbf6b3fdb4798944c3\""
Feb 13 19:30:05.723581 systemd[1]: Started cri-containerd-ebe9523a90d45d88f807921e12fc601040fa17072619b9fbf6b3fdb4798944c3.scope - libcontainer container ebe9523a90d45d88f807921e12fc601040fa17072619b9fbf6b3fdb4798944c3.
Feb 13 19:30:05.801382 containerd[1901]: time="2025-02-13T19:30:05.801316780Z" level=info msg="StartContainer for \"ebe9523a90d45d88f807921e12fc601040fa17072619b9fbf6b3fdb4798944c3\" returns successfully"
Feb 13 19:30:05.970572 systemd-networkd[1745]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:30:06.567033 kubelet[2397]: I0213 19:30:06.566968 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=34.951572806 podStartE2EDuration="35.566950543s" podCreationTimestamp="2025-02-13 19:29:31 +0000 UTC" firstStartedPulling="2025-02-13 19:30:04.983256343 +0000 UTC m=+76.480648295" lastFinishedPulling="2025-02-13 19:30:05.598634086 +0000 UTC m=+77.096026032" observedRunningTime="2025-02-13 19:30:06.566653071 +0000 UTC m=+78.064045031" watchObservedRunningTime="2025-02-13 19:30:06.566950543 +0000 UTC m=+78.064342503"
Feb 13 19:30:06.591111 kubelet[2397]: E0213 19:30:06.591065 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:07.591801 kubelet[2397]: E0213 19:30:07.591739 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:08.592515 kubelet[2397]: E0213 19:30:08.592456
2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:08.768804 ntpd[1881]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:30:08.769305 ntpd[1881]: 13 Feb 19:30:08 ntpd[1881]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:30:09.504423 kubelet[2397]: E0213 19:30:09.504355 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:09.593531 kubelet[2397]: E0213 19:30:09.593465 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:10.593773 kubelet[2397]: E0213 19:30:10.593718 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:11.594559 kubelet[2397]: E0213 19:30:11.594501 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:12.595111 kubelet[2397]: E0213 19:30:12.594927 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:13.595977 kubelet[2397]: E0213 19:30:13.595921 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:14.597228 kubelet[2397]: E0213 19:30:14.597152 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:15.597661 kubelet[2397]: E0213 19:30:15.597603 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:16.598738 kubelet[2397]: E0213 19:30:16.598682 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:17.598914 kubelet[2397]: E0213 19:30:17.598856 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:18.604878 kubelet[2397]: E0213 19:30:18.604817 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:19.605804 kubelet[2397]: E0213 19:30:19.605752 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:20.606300 kubelet[2397]: E0213 19:30:20.606245 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:21.608383 kubelet[2397]: E0213 19:30:21.606730 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:22.607427 kubelet[2397]: E0213 19:30:22.607356 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:23.607996 kubelet[2397]: E0213 19:30:23.607938 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:24.609239 kubelet[2397]: E0213 19:30:24.609181 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:25.610183 kubelet[2397]: E0213 19:30:25.610129 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:26.610896 kubelet[2397]: E0213 19:30:26.610686 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:27.611775 kubelet[2397]: E0213 19:30:27.611733 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:28.612917 kubelet[2397]: E0213 19:30:28.612863 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:29.504219 kubelet[2397]: E0213 19:30:29.504162 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:29.613842 kubelet[2397]: E0213 19:30:29.613010 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:30.614936 kubelet[2397]: E0213 19:30:30.614878 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:31.113548 kubelet[2397]: E0213 19:30:31.113490 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:30:31.616047 kubelet[2397]: E0213 19:30:31.615996 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:32.616492 kubelet[2397]: E0213 19:30:32.616425 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:33.616927 kubelet[2397]: E0213 19:30:33.616849 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:34.617529 kubelet[2397]: E0213 19:30:34.617471 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:35.618253 kubelet[2397]: E0213 19:30:35.618198 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:36.618687 kubelet[2397]: E0213 19:30:36.618538 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:37.619314 kubelet[2397]: E0213 19:30:37.619254 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:38.620415 kubelet[2397]: E0213 19:30:38.620347 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:39.621886 kubelet[2397]: E0213 19:30:39.621477 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:40.622518 kubelet[2397]: E0213 19:30:40.622460 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:41.114173 kubelet[2397]: E0213 19:30:41.114112 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:30:41.623599 kubelet[2397]: E0213 19:30:41.623541 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:42.624611 kubelet[2397]: E0213 19:30:42.624552 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:43.625500 kubelet[2397]: E0213 19:30:43.625457 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:44.626402 kubelet[2397]: E0213 19:30:44.626340 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:45.626836 kubelet[2397]: E0213 19:30:45.626770 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:46.627405 kubelet[2397]: E0213 19:30:46.627334 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:47.628184 kubelet[2397]: E0213 19:30:47.628126 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:48.628718 kubelet[2397]: E0213 19:30:48.628660 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:49.504837 kubelet[2397]: E0213 19:30:49.504779 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:49.632121 kubelet[2397]: E0213 19:30:49.631275 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:50.632395 kubelet[2397]: E0213 19:30:50.632330 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:51.114917 kubelet[2397]: E0213 19:30:51.114843 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:30:51.632903 kubelet[2397]: E0213 19:30:51.632842 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:52.633927 kubelet[2397]: E0213 19:30:52.633870 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:53.634603 kubelet[2397]: E0213 19:30:53.634541 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:54.635033 kubelet[2397]: E0213 19:30:54.634976 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:55.635925 kubelet[2397]: E0213 19:30:55.635866 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:56.636978 kubelet[2397]: E0213 19:30:56.636919 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:57.638008 kubelet[2397]: E0213 19:30:57.637947 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:58.638459 kubelet[2397]: E0213 19:30:58.638399 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:30:59.639720 kubelet[2397]: E0213 19:30:59.639657 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:00.640055 kubelet[2397]: E0213 19:31:00.640001 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:01.115774 kubelet[2397]: E0213 19:31:01.115669 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:31:01.175341 kubelet[2397]: E0213 19:31:01.172611 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": unexpected EOF"
Feb 13 19:31:01.175341 kubelet[2397]: I0213 19:31:01.172661 2397 controller.go:115] "failed to update lease using latest lease,
fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 19:31:01.641070 kubelet[2397]: E0213 19:31:01.641008 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:02.186717 kubelet[2397]: E0213 19:31:02.186640 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.25.130:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.17.153:57880->172.31.25.130:6443: read: connection reset by peer" interval="200ms" Feb 13 19:31:02.641446 kubelet[2397]: E0213 19:31:02.641385 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:03.641570 kubelet[2397]: E0213 19:31:03.641510 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:04.642617 kubelet[2397]: E0213 19:31:04.642568 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:05.643594 kubelet[2397]: E0213 19:31:05.643534 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:06.644621 kubelet[2397]: E0213 19:31:06.644563 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:07.645175 kubelet[2397]: E0213 19:31:07.645115 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:08.646195 kubelet[2397]: E0213 19:31:08.646131 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:09.504728 kubelet[2397]: 
E0213 19:31:09.504666 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:09.647162 kubelet[2397]: E0213 19:31:09.647108 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:10.647533 kubelet[2397]: E0213 19:31:10.647470 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:11.648906 kubelet[2397]: E0213 19:31:11.648845 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:12.388659 kubelet[2397]: E0213 19:31:12.388596 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 13 19:31:12.649150 kubelet[2397]: E0213 19:31:12.649006 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:13.649472 kubelet[2397]: E0213 19:31:13.649414 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:14.649854 kubelet[2397]: E0213 19:31:14.649794 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:15.650150 kubelet[2397]: E0213 19:31:15.650086 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:16.650965 kubelet[2397]: E0213 19:31:16.650908 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"