Feb 13 15:30:23.035831 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:30:23.035872 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:30:23.035889 kernel: BIOS-provided physical RAM map:
Feb 13 15:30:23.035901 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:30:23.035913 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:30:23.035923 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:30:23.035939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:30:23.035950 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:30:23.035963 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:30:23.035974 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:30:23.035987 kernel: NX (Execute Disable) protection: active
Feb 13 15:30:23.035999 kernel: APIC: Static calls initialized
Feb 13 15:30:23.036011 kernel: SMBIOS 2.7 present.
Feb 13 15:30:23.036023 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:30:23.036041 kernel: Hypervisor detected: KVM
Feb 13 15:30:23.036053 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:30:23.036067 kernel: kvm-clock: using sched offset of 8252759892 cycles
Feb 13 15:30:23.036081 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:30:23.036094 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 15:30:23.036109 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:30:23.036122 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:30:23.036138 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:30:23.036152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:30:23.036165 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:30:23.036178 kernel: Using GB pages for direct mapping
Feb 13 15:30:23.036192 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:30:23.036205 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:30:23.036219 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:30:23.036232 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:30:23.036329 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:30:23.036348 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:30:23.036362 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:30:23.036375 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:30:23.036389 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:30:23.036402 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:30:23.036415 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:30:23.036429 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:30:23.036442 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:30:23.036455 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:30:23.036473 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:30:23.036492 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:30:23.036506 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:30:23.036520 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:30:23.036535 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:30:23.036553 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:30:23.036567 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:30:23.036581 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:30:23.036595 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:30:23.036690 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:30:23.036707 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:30:23.036721 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:30:23.036736 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:30:23.036750 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:30:23.036768 kernel: Zone ranges:
Feb 13 15:30:23.036783 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:30:23.036808 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:30:23.036823 kernel: Normal empty
Feb 13 15:30:23.036837 kernel: Movable zone start for each node
Feb 13 15:30:23.036851 kernel: Early memory node ranges
Feb 13 15:30:23.036866 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:30:23.036880 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:30:23.036893 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:30:23.036911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:30:23.036925 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:30:23.036939 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:30:23.036953 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:30:23.036966 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:30:23.036980 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:30:23.036992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:30:23.037006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:30:23.037025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:30:23.037042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:30:23.037058 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:30:23.037072 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:30:23.037086 kernel: TSC deadline timer available
Feb 13 15:30:23.037099 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:30:23.037114 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:30:23.037127 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:30:23.037141 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:30:23.037156 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:30:23.037171 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:30:23.037191 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:30:23.037206 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:30:23.037221 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:30:23.037236 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:30:23.037251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:30:23.037268 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:30:23.037284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:30:23.037299 kernel: random: crng init done
Feb 13 15:30:23.037317 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:30:23.037333 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:30:23.037348 kernel: Fallback order for Node 0: 0
Feb 13 15:30:23.037363 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:30:23.037379 kernel: Policy zone: DMA32
Feb 13 15:30:23.037394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:30:23.037409 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:30:23.037425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:30:23.037443 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:30:23.037458 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:30:23.037473 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:30:23.037489 kernel: Dynamic Preempt: voluntary
Feb 13 15:30:23.037505 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:30:23.037521 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:30:23.037536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:30:23.037552 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:30:23.037567 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:30:23.037582 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:30:23.037601 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:30:23.037616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:30:23.037632 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:30:23.037647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:30:23.037662 kernel: Console: colour VGA+ 80x25
Feb 13 15:30:23.037676 kernel: printk: console [ttyS0] enabled
Feb 13 15:30:23.037691 kernel: ACPI: Core revision 20230628
Feb 13 15:30:23.037706 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:30:23.037722 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:30:23.037741 kernel: x2apic enabled
Feb 13 15:30:23.037756 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:30:23.049348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:30:23.049377 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 15:30:23.049392 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:30:23.049405 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:30:23.049418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:30:23.049431 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:30:23.049461 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:30:23.049475 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:30:23.049489 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:30:23.049502 kernel: RETBleed: Vulnerable
Feb 13 15:30:23.049602 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:30:23.049615 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:30:23.049628 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:30:23.049641 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:30:23.049654 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:30:23.049668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:30:23.049682 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:30:23.049698 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:30:23.049712 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:30:23.049724 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:30:23.049738 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:30:23.049751 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:30:23.049764 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:30:23.049778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:30:23.049803 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:30:23.049847 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:30:23.049862 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:30:23.049875 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:30:23.049893 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:30:23.049906 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:30:23.049930 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:30:23.049943 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:30:23.049956 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:30:23.049969 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:30:23.049982 kernel: landlock: Up and running.
Feb 13 15:30:23.049995 kernel: SELinux: Initializing.
Feb 13 15:30:23.050007 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:30:23.050019 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:30:23.050032 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:30:23.050048 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:30:23.050173 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:30:23.050189 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:30:23.050203 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:30:23.050217 kernel: signal: max sigframe size: 3632
Feb 13 15:30:23.050230 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:30:23.050244 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:30:23.050256 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:30:23.050269 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:30:23.050286 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:30:23.050299 kernel: .... node #0, CPUs: #1
Feb 13 15:30:23.050314 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:30:23.050328 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:30:23.050341 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:30:23.050355 kernel: smpboot: Max logical packages: 1
Feb 13 15:30:23.050368 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 15:30:23.050381 kernel: devtmpfs: initialized
Feb 13 15:30:23.050396 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:30:23.050409 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:30:23.050422 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:30:23.050435 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:30:23.050448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:30:23.050461 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:30:23.050474 kernel: audit: type=2000 audit(1739460622.367:1): state=initialized audit_enabled=0 res=1
Feb 13 15:30:23.050487 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:30:23.050500 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:30:23.050517 kernel: cpuidle: using governor menu
Feb 13 15:30:23.050555 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:30:23.050569 kernel: dca service started, version 1.12.1
Feb 13 15:30:23.050582 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:30:23.050596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:30:23.050609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:30:23.050623 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:30:23.050636 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:30:23.050650 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:30:23.050666 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:30:23.050750 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:30:23.050766 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:30:23.050781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:30:23.050857 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:30:23.050874 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:30:23.050889 kernel: ACPI: Interpreter enabled
Feb 13 15:30:23.050905 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:30:23.050922 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:30:23.050938 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:30:23.050960 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:30:23.050976 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:30:23.050992 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:30:23.060043 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:30:23.060240 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:30:23.060376 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:30:23.060397 kernel: acpiphp: Slot [3] registered
Feb 13 15:30:23.060421 kernel: acpiphp: Slot [4] registered
Feb 13 15:30:23.060438 kernel: acpiphp: Slot [5] registered
Feb 13 15:30:23.060454 kernel: acpiphp: Slot [6] registered
Feb 13 15:30:23.060469 kernel: acpiphp: Slot [7] registered
Feb 13 15:30:23.060485 kernel: acpiphp: Slot [8] registered
Feb 13 15:30:23.060501 kernel: acpiphp: Slot [9] registered
Feb 13 15:30:23.060518 kernel: acpiphp: Slot [10] registered
Feb 13 15:30:23.060535 kernel: acpiphp: Slot [11] registered
Feb 13 15:30:23.060550 kernel: acpiphp: Slot [12] registered
Feb 13 15:30:23.060570 kernel: acpiphp: Slot [13] registered
Feb 13 15:30:23.060586 kernel: acpiphp: Slot [14] registered
Feb 13 15:30:23.060603 kernel: acpiphp: Slot [15] registered
Feb 13 15:30:23.060619 kernel: acpiphp: Slot [16] registered
Feb 13 15:30:23.060634 kernel: acpiphp: Slot [17] registered
Feb 13 15:30:23.060651 kernel: acpiphp: Slot [18] registered
Feb 13 15:30:23.060668 kernel: acpiphp: Slot [19] registered
Feb 13 15:30:23.060684 kernel: acpiphp: Slot [20] registered
Feb 13 15:30:23.060700 kernel: acpiphp: Slot [21] registered
Feb 13 15:30:23.060716 kernel: acpiphp: Slot [22] registered
Feb 13 15:30:23.060735 kernel: acpiphp: Slot [23] registered
Feb 13 15:30:23.060751 kernel: acpiphp: Slot [24] registered
Feb 13 15:30:23.060767 kernel: acpiphp: Slot [25] registered
Feb 13 15:30:23.060783 kernel: acpiphp: Slot [26] registered
Feb 13 15:30:23.060812 kernel: acpiphp: Slot [27] registered
Feb 13 15:30:23.060826 kernel: acpiphp: Slot [28] registered
Feb 13 15:30:23.060842 kernel: acpiphp: Slot [29] registered
Feb 13 15:30:23.060858 kernel: acpiphp: Slot [30] registered
Feb 13 15:30:23.060874 kernel: acpiphp: Slot [31] registered
Feb 13 15:30:23.061063 kernel: PCI host bridge to bus 0000:00
Feb 13 15:30:23.061233 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:30:23.061380 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:30:23.061634 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:30:23.066558 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:30:23.066718 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:30:23.066908 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:30:23.067067 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:30:23.067330 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:30:23.067471 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:30:23.067604 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:30:23.067737 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:30:23.067989 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:30:23.068129 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:30:23.068280 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:30:23.068408 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:30:23.068652 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:30:23.074993 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:30:23.075156 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:30:23.075282 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:30:23.075406 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:30:23.075544 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:30:23.077389 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:30:23.077603 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:30:23.082250 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:30:23.082372 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:30:23.082392 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:30:23.082419 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:30:23.082433 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:30:23.082456 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:30:23.082480 kernel: iommu: Default domain type: Translated
Feb 13 15:30:23.082571 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:30:23.082588 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:30:23.082604 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:30:23.082620 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:30:23.084892 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:30:23.085133 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:30:23.085274 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:30:23.085407 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:30:23.085427 kernel: vgaarb: loaded
Feb 13 15:30:23.085444 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:30:23.085461 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:30:23.085477 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:30:23.085493 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:30:23.085509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:30:23.085530 kernel: pnp: PnP ACPI init
Feb 13 15:30:23.085545 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:30:23.085561 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:30:23.085577 kernel: NET: Registered PF_INET protocol family
Feb 13 15:30:23.085593 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:30:23.085609 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:30:23.085625 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:30:23.085641 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:30:23.085661 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:30:23.085677 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:30:23.085710 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:30:23.085726 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:30:23.085745 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:30:23.085762 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:30:23.085900 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:30:23.086018 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:30:23.086223 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:30:23.086354 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:30:23.086495 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:30:23.086517 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:30:23.086534 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:30:23.086558 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:30:23.086574 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:30:23.086591 kernel: Initialise system trusted keyrings
Feb 13 15:30:23.086607 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:30:23.086628 kernel: Key type asymmetric registered
Feb 13 15:30:23.086644 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:30:23.086660 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:30:23.086676 kernel: io scheduler mq-deadline registered
Feb 13 15:30:23.086693 kernel: io scheduler kyber registered
Feb 13 15:30:23.086709 kernel: io scheduler bfq registered
Feb 13 15:30:23.086725 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:30:23.086741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:30:23.086758 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:30:23.086778 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:30:23.088827 kernel: i8042: Warning: Keylock active
Feb 13 15:30:23.088852 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:30:23.088869 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:30:23.089046 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:30:23.089170 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:30:23.089291 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:30:22 UTC (1739460622)
Feb 13 15:30:23.089407 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:30:23.089433 kernel: intel_pstate: CPU model not supported
Feb 13 15:30:23.089467 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:30:23.089484 kernel: Segment Routing with IPv6
Feb 13 15:30:23.089501 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:30:23.089517 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:30:23.089534 kernel: Key type dns_resolver registered
Feb 13 15:30:23.089551 kernel: IPI shorthand broadcast: enabled
Feb 13 15:30:23.089568 kernel: sched_clock: Marking stable (583051449, 266464205)->(956947109, -107431455)
Feb 13 15:30:23.089584 kernel: registered taskstats version 1
Feb 13 15:30:23.089604 kernel: Loading compiled-in X.509 certificates
Feb 13 15:30:23.089621 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:30:23.089637 kernel: Key type .fscrypt registered
Feb 13 15:30:23.089653 kernel: Key type fscrypt-provisioning registered
Feb 13 15:30:23.089670 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:30:23.089686 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:30:23.089702 kernel: ima: No architecture policies found
Feb 13 15:30:23.089719 kernel: clk: Disabling unused clocks
Feb 13 15:30:23.089735 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:30:23.089755 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:30:23.089772 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:30:23.089788 kernel: Run /init as init process
Feb 13 15:30:23.089828 kernel: with arguments:
Feb 13 15:30:23.089844 kernel: /init
Feb 13 15:30:23.089860 kernel: with environment:
Feb 13 15:30:23.089875 kernel: HOME=/
Feb 13 15:30:23.089891 kernel: TERM=linux
Feb 13 15:30:23.089907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:30:23.090158 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:30:23.090197 systemd[1]: Detected virtualization amazon.
Feb 13 15:30:23.090218 systemd[1]: Detected architecture x86-64.
Feb 13 15:30:23.090236 systemd[1]: Running in initrd.
Feb 13 15:30:23.090254 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:30:23.090274 systemd[1]: Hostname set to <localhost>.
Feb 13 15:30:23.090293 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:30:23.090311 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:30:23.090367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:30:23.090386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:30:23.090405 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:30:23.090424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:30:23.090445 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:30:23.090464 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:30:23.090485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:30:23.090503 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:30:23.090521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:30:23.090547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:30:23.090566 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:30:23.090587 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:30:23.090605 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:30:23.090681 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:30:23.090703 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:30:23.090722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:30:23.090741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:30:23.090759 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:30:23.090777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:30:23.092346 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:30:23.092765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:30:23.095547 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:30:23.095577 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:30:23.095594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:30:23.095611 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:30:23.095627 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:30:23.095643 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:30:23.095667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:30:23.095684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:30:23.095698 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:30:23.095721 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:30:23.095740 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:30:23.095817 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:30:23.095857 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:30:23.095873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:30:23.095890 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:30:23.095910 systemd-journald[179]: Journal started
Feb 13 15:30:23.095942 systemd-journald[179]: Runtime Journal (/run/log/journal/ec24a1e22bf0fd459a8c28ee45fc2fc3) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:30:23.032626 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:30:23.236873 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:30:23.236912 kernel: Bridge firewalling registered
Feb 13 15:30:23.102726 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:30:23.239657 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:30:23.240168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:30:23.250321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:30:23.254598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:30:23.272007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:30:23.276258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:30:23.295629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:30:23.300055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:30:23.314070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:30:23.326058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:30:23.337139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:30:23.355412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:30:23.367557 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:30:23.400610 dracut-cmdline[216]: dracut-dracut-053
Feb 13 15:30:23.405730 systemd-resolved[205]: Positive Trust Anchors:
Feb 13 15:30:23.416469 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:30:23.405751 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:30:23.405830 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:30:23.410515 systemd-resolved[205]: Defaulting to hostname 'linux'.
Feb 13 15:30:23.412461 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:30:23.414194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:30:23.505821 kernel: SCSI subsystem initialized
Feb 13 15:30:23.514819 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:30:23.525817 kernel: iscsi: registered transport (tcp)
Feb 13 15:30:23.550899 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:30:23.550985 kernel: QLogic iSCSI HBA Driver
Feb 13 15:30:23.589388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:30:23.595015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:30:23.622846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:30:23.622926 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:30:23.622947 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:30:23.664820 kernel: raid6: avx512x4 gen() 14687 MB/s
Feb 13 15:30:23.681821 kernel: raid6: avx512x2 gen() 12807 MB/s
Feb 13 15:30:23.698819 kernel: raid6: avx512x1 gen() 15080 MB/s
Feb 13 15:30:23.715821 kernel: raid6: avx2x4 gen() 14853 MB/s
Feb 13 15:30:23.732825 kernel: raid6: avx2x2 gen() 14910 MB/s
Feb 13 15:30:23.749827 kernel: raid6: avx2x1 gen() 11222 MB/s
Feb 13 15:30:23.749913 kernel: raid6: using algorithm avx512x1 gen() 15080 MB/s
Feb 13 15:30:23.766821 kernel: raid6: .... xor() 21205 MB/s, rmw enabled
Feb 13 15:30:23.766886 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:30:23.797874 kernel: xor: automatically using best checksumming function avx
Feb 13 15:30:23.984822 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:30:23.995569 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:30:24.003042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:30:24.032998 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 15:30:24.039039 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:30:24.051106 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:30:24.077764 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 15:30:24.123261 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:30:24.133289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:30:24.201534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:30:24.211064 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:30:24.268469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:30:24.277282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:30:24.282202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:30:24.285521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:30:24.299220 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:30:24.326835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:30:24.354971 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:30:24.362993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:30:24.364540 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:30:24.368171 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:30:24.369529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:30:24.369739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:30:24.371024 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:30:24.379770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:30:24.400871 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:30:24.403154 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:30:24.403217 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:30:24.426708 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:30:24.427110 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:30:24.427337 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:52:9d:8b:8a:8f
Feb 13 15:30:24.432652 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:30:24.457533 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:30:24.458404 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:30:24.484892 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:30:24.492610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:30:24.492674 kernel: GPT:9289727 != 16777215
Feb 13 15:30:24.492694 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:30:24.492711 kernel: GPT:9289727 != 16777215
Feb 13 15:30:24.492728 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:30:24.492744 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:30:24.564826 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (462)
Feb 13 15:30:24.593836 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Feb 13 15:30:24.625704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:30:24.636021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:30:24.674669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:30:24.690581 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:30:24.692710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:30:24.706574 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:30:24.726333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:30:24.733537 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:30:24.739004 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:30:24.750986 disk-uuid[626]: Primary Header is updated.
Feb 13 15:30:24.750986 disk-uuid[626]: Secondary Entries is updated.
Feb 13 15:30:24.750986 disk-uuid[626]: Secondary Header is updated.
Feb 13 15:30:24.758815 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:30:24.769818 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:30:25.773008 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:30:25.773818 disk-uuid[627]: The operation has completed successfully.
Feb 13 15:30:25.989545 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:30:25.989683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:30:26.019623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:30:26.032115 sh[885]: Success
Feb 13 15:30:26.074819 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:30:26.255049 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:30:26.268084 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:30:26.276343 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:30:26.326876 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:30:26.326997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:30:26.327034 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:30:26.328451 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:30:26.328492 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:30:26.354818 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:30:26.370030 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:30:26.371126 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:30:26.380979 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:30:26.386093 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:30:26.429402 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:30:26.429532 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:30:26.429604 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:30:26.435931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:30:26.456892 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:30:26.456087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:30:26.465814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:30:26.476273 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:30:26.573731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:30:26.600149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:30:26.661633 systemd-networkd[1077]: lo: Link UP
Feb 13 15:30:26.661652 systemd-networkd[1077]: lo: Gained carrier
Feb 13 15:30:26.667204 systemd-networkd[1077]: Enumeration completed
Feb 13 15:30:26.667337 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:30:26.668766 systemd[1]: Reached target network.target - Network.
Feb 13 15:30:26.677420 systemd-networkd[1077]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:30:26.677426 systemd-networkd[1077]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:30:26.691488 systemd-networkd[1077]: eth0: Link UP
Feb 13 15:30:26.691495 systemd-networkd[1077]: eth0: Gained carrier
Feb 13 15:30:26.691510 systemd-networkd[1077]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:30:26.726904 systemd-networkd[1077]: eth0: DHCPv4 address 172.31.26.113/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:30:26.738747 ignition[1010]: Ignition 2.20.0
Feb 13 15:30:26.738762 ignition[1010]: Stage: fetch-offline
Feb 13 15:30:26.739031 ignition[1010]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:30:26.739044 ignition[1010]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:30:26.740764 ignition[1010]: Ignition finished successfully
Feb 13 15:30:26.744009 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:30:26.751239 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:30:26.770834 ignition[1086]: Ignition 2.20.0
Feb 13 15:30:26.770856 ignition[1086]: Stage: fetch
Feb 13 15:30:26.771295 ignition[1086]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:30:26.771305 ignition[1086]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:30:26.771386 ignition[1086]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:30:26.790707 ignition[1086]: PUT result: OK
Feb 13 15:30:26.793802 ignition[1086]: parsed url from cmdline: ""
Feb 13 15:30:26.793832 ignition[1086]: no config URL provided
Feb 13 15:30:26.793844 ignition[1086]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:30:26.793869 ignition[1086]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:30:26.793895 ignition[1086]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:30:26.795118 ignition[1086]: PUT result: OK
Feb 13 15:30:26.795172 ignition[1086]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:30:26.797712 ignition[1086]: GET result: OK
Feb 13 15:30:26.797844 ignition[1086]: parsing config with SHA512: 65ffaf36adfb40ecc2efe84e3ee8cec4f6242613346db8b32d32b5f95ea2c9ea9b9289df25cbcc647354c91de85876d9ca8c09e33e10f4a5868c4a90690b22d3
Feb 13 15:30:26.805765 unknown[1086]: fetched base config from "system"
Feb 13 15:30:26.805781 unknown[1086]: fetched base config from "system"
Feb 13 15:30:26.806101 ignition[1086]: fetch: fetch complete
Feb 13 15:30:26.805799 unknown[1086]: fetched user config from "aws"
Feb 13 15:30:26.806107 ignition[1086]: fetch: fetch passed
Feb 13 15:30:26.809216 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:30:26.806159 ignition[1086]: Ignition finished successfully
Feb 13 15:30:26.843924 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:30:26.873321 ignition[1092]: Ignition 2.20.0
Feb 13 15:30:26.873335 ignition[1092]: Stage: kargs
Feb 13 15:30:26.873887 ignition[1092]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:30:26.873897 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:30:26.874003 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:30:26.875507 ignition[1092]: PUT result: OK
Feb 13 15:30:26.881390 ignition[1092]: kargs: kargs passed
Feb 13 15:30:26.881470 ignition[1092]: Ignition finished successfully
Feb 13 15:30:26.884536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:30:26.891991 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:30:26.908764 ignition[1098]: Ignition 2.20.0
Feb 13 15:30:26.908919 ignition[1098]: Stage: disks
Feb 13 15:30:26.910733 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:30:26.910891 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:30:26.911458 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:30:26.914227 ignition[1098]: PUT result: OK
Feb 13 15:30:26.921527 ignition[1098]: disks: disks passed
Feb 13 15:30:26.921587 ignition[1098]: Ignition finished successfully
Feb 13 15:30:26.929172 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:30:26.931911 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:30:26.935519 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:30:26.938046 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:30:26.940817 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:30:26.944683 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:30:26.954272 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:30:27.002112 systemd-fsck[1106]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:30:27.007322 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:30:27.017114 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:30:27.132106 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:30:27.133143 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:30:27.134125 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:30:27.149960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:30:27.158910 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:30:27.162488 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:30:27.162681 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:30:27.165209 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:30:27.177034 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:30:27.184694 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1125)
Feb 13 15:30:27.191578 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:30:27.191652 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:30:27.191675 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:30:27.200817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:30:27.208821 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:30:27.211786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:30:27.344738 initrd-setup-root[1150]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:30:27.366575 initrd-setup-root[1157]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:30:27.380345 initrd-setup-root[1164]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:30:27.390469 initrd-setup-root[1171]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:30:27.556713 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:30:27.565058 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:30:27.568951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:30:27.581772 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:30:27.584460 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:30:27.624808 ignition[1238]: INFO : Ignition 2.20.0
Feb 13 15:30:27.624808 ignition[1238]: INFO : Stage: mount
Feb 13 15:30:27.628607 ignition[1238]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:30:27.628607 ignition[1238]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:30:27.628607 ignition[1238]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:30:27.632979 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:30:27.637329 ignition[1238]: INFO : PUT result: OK
Feb 13 15:30:27.640617 ignition[1238]: INFO : mount: mount passed
Feb 13 15:30:27.642075 ignition[1238]: INFO : Ignition finished successfully
Feb 13 15:30:27.644599 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:30:27.652965 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:30:27.697985 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:30:27.733087 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1251)
Feb 13 15:30:27.735497 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:30:27.735548 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:30:27.735562 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:30:27.747021 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:30:27.754258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:30:27.789931 ignition[1268]: INFO : Ignition 2.20.0 Feb 13 15:30:27.791172 ignition[1268]: INFO : Stage: files Feb 13 15:30:27.791172 ignition[1268]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:30:27.791172 ignition[1268]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:30:27.791172 ignition[1268]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:30:27.796865 ignition[1268]: INFO : PUT result: OK Feb 13 15:30:27.800729 ignition[1268]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:30:27.802907 ignition[1268]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:30:27.802907 ignition[1268]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:30:27.807486 ignition[1268]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:30:27.809130 ignition[1268]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:30:27.812009 unknown[1268]: wrote ssh authorized keys file for user: core Feb 13 15:30:27.813559 ignition[1268]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:30:27.817140 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:30:27.817140 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:30:27.817140 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:30:27.817140 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:30:27.817140 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:30:27.835012 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:30:27.835012 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:30:27.835012 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 15:30:28.021936 systemd-networkd[1077]: eth0: Gained IPv6LL Feb 13 15:30:28.116119 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:30:28.437587 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:30:28.440436 ignition[1268]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:30:28.440436 ignition[1268]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:30:28.440436 ignition[1268]: INFO : files: files passed Feb 13 15:30:28.440436 ignition[1268]: INFO : Ignition finished successfully Feb 13 15:30:28.439561 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:30:28.455014 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:30:28.459898 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:30:28.466063 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:30:28.469786 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:30:28.502673 initrd-setup-root-after-ignition[1296]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:30:28.505003 initrd-setup-root-after-ignition[1296]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:30:28.506979 initrd-setup-root-after-ignition[1300]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:30:28.510709 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:30:28.511292 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:30:28.520144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:30:28.573255 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:30:28.573391 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:30:28.576562 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:30:28.580079 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:30:28.583268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:30:28.590474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:30:28.608681 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:30:28.614002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:30:28.635368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:30:28.635559 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:30:28.641136 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:30:28.643089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:30:28.643221 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:30:28.647219 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:30:28.648665 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:30:28.651855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:30:28.656966 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:30:28.667383 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:30:28.670269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:30:28.683512 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:30:28.699232 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:30:28.700374 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:30:28.710929 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:30:28.715141 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
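The files stage above creates or modifies the "core" user, installs its SSH keys, writes install.sh and update.conf, links kubernetes.raw into /etc/extensions, and downloads the sysext image from the sysext-bakery release. A rough reconstruction of an Ignition config that would drive those operations; the spec version, key material, and file contents are placeholders, not recovered from the log:

```python
import json

# Hypothetical reconstruction, not the instance's actual user data.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]},
    ]},
    "storage": {
        "files": [
            # 0o755 serializes as decimal 493, the integer form Ignition expects.
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},   # "#!/bin/bash\n"
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},        # placeholder content
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
        ],
    },
}
print(json.dumps(config, indent=2))
```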
Feb 13 15:30:28.715274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:30:28.723158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:30:28.725656 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:30:28.732381 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:30:28.734531 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:30:28.736298 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:30:28.736433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:30:28.742653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:30:28.744586 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:30:28.748900 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:30:28.749091 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:30:28.757044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:30:28.761308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:30:28.765434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:30:28.765643 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:30:28.767644 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:30:28.768528 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:30:28.789499 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:30:28.792941 ignition[1320]: INFO : Ignition 2.20.0 Feb 13 15:30:28.792941 ignition[1320]: INFO : Stage: umount Feb 13 15:30:28.792941 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:30:28.792941 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:30:28.792941 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:30:28.789653 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:30:28.802287 ignition[1320]: INFO : PUT result: OK Feb 13 15:30:28.810285 ignition[1320]: INFO : umount: umount passed Feb 13 15:30:28.810285 ignition[1320]: INFO : Ignition finished successfully Feb 13 15:30:28.813891 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:30:28.814008 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:30:28.819739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:30:28.821111 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:30:28.821382 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:30:28.825012 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:30:28.825092 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:30:28.831236 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:30:28.831318 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:30:28.834170 systemd[1]: Stopped target network.target - Network. Feb 13 15:30:28.834308 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:30:28.834375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:30:28.834726 systemd[1]: Stopped target paths.target - Path Units. 
Feb 13 15:30:28.835033 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:30:28.841873 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:30:28.848933 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:30:28.849035 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:30:28.852333 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:30:28.852383 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:30:28.854987 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:30:28.855033 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:30:28.865546 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:30:28.865635 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:30:28.871241 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:30:28.871378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:30:28.876909 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:30:28.879130 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:30:28.889300 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:30:28.889610 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:30:28.891114 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:30:28.891212 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:30:28.893850 systemd-networkd[1077]: eth0: DHCPv6 lease lost Feb 13 15:30:28.902099 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:30:28.903253 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:30:28.909569 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:30:28.909685 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:30:28.912274 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:30:28.912331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:30:28.920916 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:30:28.922316 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:30:28.922379 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:30:28.924216 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:30:28.924265 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:30:28.925966 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:30:28.926025 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:30:28.929474 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:30:28.929541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:30:28.949298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:30:28.995695 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:30:28.995977 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:30:29.000999 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 15:30:29.002258 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:30:29.008294 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:30:29.008373 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:30:29.008493 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:30:29.008524 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:30:29.012971 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:30:29.013034 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:30:29.021671 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:30:29.021764 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:30:29.035836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:30:29.035935 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:30:29.058999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:30:29.060682 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:30:29.060754 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:30:29.062422 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:30:29.062511 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:30:29.065610 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:30:29.065666 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:30:29.067884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:30:29.067969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:30:29.071539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:30:29.071725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:30:29.074187 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:30:29.100467 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:30:29.119867 systemd[1]: Switching root. Feb 13 15:30:29.141910 systemd-journald[179]: Journal stopped Feb 13 15:30:30.705467 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Feb 13 15:30:30.708784 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:30:30.708825 kernel: SELinux: policy capability open_perms=1 Feb 13 15:30:30.708843 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:30:30.708861 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:30:30.708883 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:30:30.708901 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:30:30.708918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:30:30.708941 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:30:30.708959 kernel: audit: type=1403 audit(1739460629.479:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:30:30.708978 systemd[1]: Successfully loaded SELinux policy in 45.418ms. Feb 13 15:30:30.709005 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.585ms. 
Feb 13 15:30:30.709026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:30:30.709044 systemd[1]: Detected virtualization amazon. Feb 13 15:30:30.709066 systemd[1]: Detected architecture x86-64. Feb 13 15:30:30.709085 systemd[1]: Detected first boot. Feb 13 15:30:30.709103 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:30:30.709125 zram_generator::config[1362]: No configuration found. Feb 13 15:30:30.709217 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:30:30.709244 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:30:30.709263 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:30:30.709283 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:30:30.709304 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:30:30.709324 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:30:30.709343 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:30:30.709361 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:30:30.709381 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:30:30.709404 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:30:30.709423 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:30:30.709441 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:30:30.709459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:30:30.709479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:30:30.709502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:30:30.709522 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:30:30.709541 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:30:30.709560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:30:30.711889 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:30:30.711941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:30:30.711964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:30:30.711983 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:30:30.712002 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:30:30.712022 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:30:30.712040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:30:30.712066 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:30:30.712085 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:30:30.712104 systemd[1]: Reached target swap.target - Swaps. 
Feb 13 15:30:30.712123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:30:30.712141 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:30:30.712160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:30:30.712179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:30:30.712197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:30:30.712215 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:30:30.712233 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:30:30.712254 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:30:30.712272 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:30:30.712290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:30.712309 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:30:30.712327 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:30:30.712345 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:30:30.712365 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:30:30.712383 systemd[1]: Reached target machines.target - Containers. Feb 13 15:30:30.712405 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:30:30.712423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:30:30.712442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:30:30.712459 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:30:30.712477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:30:30.712494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:30:30.712512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:30:30.712531 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:30:30.712549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:30:30.712569 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:30:30.712589 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:30:30.712606 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:30:30.712625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:30:30.712643 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:30:30.712662 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:30:30.712683 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:30:30.712700 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:30:30.712722 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 15:30:30.712741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:30:30.712759 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:30:30.712777 systemd[1]: Stopped verity-setup.service. Feb 13 15:30:30.712807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:30.712826 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:30:30.712844 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:30:30.712862 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:30:30.712880 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:30:30.712979 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:30:30.713001 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:30:30.713021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:30:30.713039 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:30:30.713057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:30:30.713080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:30:30.713098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:30:30.713116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:30:30.713191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:30:30.713213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:30:30.713239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:30:30.713259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:30:30.713312 systemd-journald[1438]: Collecting audit messages is disabled. Feb 13 15:30:30.713351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:30:30.718219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:30:30.718259 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:30:30.718280 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:30:30.718304 systemd-journald[1438]: Journal started Feb 13 15:30:30.718448 systemd-journald[1438]: Runtime Journal (/run/log/journal/ec24a1e22bf0fd459a8c28ee45fc2fc3) is 4.8M, max 38.6M, 33.7M free. Feb 13 15:30:30.251346 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:30:30.721959 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:30:30.272684 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:30:30.273111 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:30:30.729948 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:30:30.739718 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:30:30.747812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Feb 13 15:30:30.747890 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:30:30.756604 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:30:30.756686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:30:30.770821 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:30:30.777801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:30:30.784475 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:30:30.803666 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:30:30.810151 kernel: fuse: init (API version 7.39) Feb 13 15:30:30.811154 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:30:30.831396 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:30:30.832909 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:30:30.843876 kernel: loop: module loaded Feb 13 15:30:30.846195 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:30:30.846889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:30:30.880126 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:30:30.887482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:30:30.904261 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 15:30:30.950876 kernel: ACPI: bus type drm_connector registered Feb 13 15:30:30.933366 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:30:30.934879 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:30:30.951017 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:30:30.963091 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:30:30.977506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:30:30.980353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:30:30.983342 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:30:30.984470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:30:30.997679 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:30:31.000959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:30:31.002746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:30:31.003278 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Feb 13 15:30:31.003299 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Feb 13 15:30:31.021169 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:30:31.035239 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:30:31.053412 systemd-journald[1438]: Time spent on flushing to /var/log/journal/ec24a1e22bf0fd459a8c28ee45fc2fc3 is 45.910ms for 954 entries. 
Feb 13 15:30:31.053412 systemd-journald[1438]: System Journal (/var/log/journal/ec24a1e22bf0fd459a8c28ee45fc2fc3) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:30:31.133234 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:30:31.133273 systemd-journald[1438]: Received client request to flush runtime journal. Feb 13 15:30:31.133315 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:30:31.056003 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:30:31.058437 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:30:31.077018 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:30:31.089192 udevadm[1501]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:30:31.136617 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:30:31.208922 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 15:30:31.208740 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:30:31.219042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:30:31.252353 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Feb 13 15:30:31.252736 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Feb 13 15:30:31.262783 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:30:31.372875 kernel: loop3: detected capacity change from 0 to 62848 Feb 13 15:30:31.521820 kernel: loop4: detected capacity change from 0 to 140992 Feb 13 15:30:31.574824 kernel: loop5: detected capacity change from 0 to 138184 Feb 13 15:30:31.631836 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 15:30:31.690028 kernel: loop7: detected capacity change from 0 to 62848 Feb 13 15:30:31.732474 (sd-merge)[1518]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:30:31.737277 (sd-merge)[1518]: Merged extensions into '/usr'. Feb 13 15:30:31.748212 systemd[1]: Reloading requested from client PID 1463 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:30:31.748842 systemd[1]: Reloading... Feb 13 15:30:31.912043 zram_generator::config[1543]: No configuration found. Feb 13 15:30:32.134841 ldconfig[1460]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:30:32.172046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:30:32.281359 systemd[1]: Reloading finished in 531 ms. Feb 13 15:30:32.308716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:30:32.310633 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:30:32.326474 systemd[1]: Starting ensure-sysext.service... Feb 13 15:30:32.338095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:30:32.358053 systemd[1]: Reloading requested from client PID 1593 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:30:32.358073 systemd[1]: Reloading... Feb 13 15:30:32.429453 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
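The "(sd-merge)" entries above show systemd-sysext collecting the staged extension images (including the kubernetes.raw symlink written during the Ignition files stage) and merging them over /usr. A sketch of the discovery step under the assumption that the usual search directories apply; only /etc/extensions is actually visible in this log:

```python
from pathlib import Path

# Sysext images are *.raw files, or symlinks to them, placed in extension
# directories. /etc/extensions appears in this log; the other two entries
# are assumptions about the customary search path.
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    root = Path(d)
    if not root.is_dir():
        continue
    for img in sorted(root.glob("*.raw")):
        print(f"{img} -> {img.resolve()}")
```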
Feb 13 15:30:32.430915 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:30:32.434310 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:30:32.437575 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Feb 13 15:30:32.437669 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Feb 13 15:30:32.457781 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:30:32.457810 systemd-tmpfiles[1594]: Skipping /boot Feb 13 15:30:32.493768 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:30:32.495836 systemd-tmpfiles[1594]: Skipping /boot Feb 13 15:30:32.527887 zram_generator::config[1621]: No configuration found. Feb 13 15:30:32.720284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:30:32.804200 systemd[1]: Reloading finished in 445 ms. Feb 13 15:30:32.823701 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:30:32.834427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:30:32.853050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:30:32.856986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:30:32.866345 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:30:32.871749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:30:32.881499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:30:32.899507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:30:32.917243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:32.917556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:30:32.923136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:30:32.934388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:30:32.950116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:30:32.951511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:30:32.952003 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:32.972144 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:30:32.980854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:32.981165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:30:32.981337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:30:32.990082 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:30:32.991499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:33.000970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:30:33.001213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:30:33.006347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:33.007031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:30:33.021858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:30:33.023607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:30:33.023990 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:30:33.025509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:30:33.026735 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:30:33.026959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:30:33.030168 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Feb 13 15:30:33.031883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:30:33.051409 systemd[1]: Finished ensure-sysext.service. Feb 13 15:30:33.057397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:30:33.058475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:30:33.061829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:30:33.070008 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:30:33.072725 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:30:33.073198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:30:33.093079 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:30:33.126491 augenrules[1710]: No rules Feb 13 15:30:33.130967 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:30:33.131341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:30:33.142914 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:30:33.145251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:30:33.159454 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:30:33.170016 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:30:33.172707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:30:33.184918 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 15:30:33.296422 systemd-networkd[1720]: lo: Link UP Feb 13 15:30:33.296929 systemd-networkd[1720]: lo: Gained carrier Feb 13 15:30:33.298234 systemd-networkd[1720]: Enumeration completed Feb 13 15:30:33.298726 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:30:33.316000 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:30:33.348299 (udev-worker)[1740]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:30:33.381892 systemd-resolved[1681]: Positive Trust Anchors: Feb 13 15:30:33.382337 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:30:33.382619 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:30:33.393113 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:30:33.399018 systemd-resolved[1681]: Defaulting to hostname 'linux'. Feb 13 15:30:33.404692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:30:33.406400 systemd[1]: Reached target network.target - Network. Feb 13 15:30:33.407600 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:30:33.447212 systemd-networkd[1720]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:30:33.447224 systemd-networkd[1720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:30:33.452031 systemd-networkd[1720]: eth0: Link UP Feb 13 15:30:33.453081 systemd-networkd[1720]: eth0: Gained carrier Feb 13 15:30:33.453161 systemd-networkd[1720]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:30:33.463896 systemd-networkd[1720]: eth0: DHCPv4 address 172.31.26.113/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:30:33.494819 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:30:33.501856 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 15:30:33.504816 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:30:33.507162 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:30:33.507231 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:30:33.529810 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 15:30:33.533901 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1736) Feb 13 15:30:33.681933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:30:33.760059 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:30:33.808285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
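The DHCPv4 lease reported above is internally consistent: a /20 prefix places 172.31.26.113 in 172.31.16.0/20, which also contains the advertised gateway 172.31.16.1 (here doubling as the DHCP server). A quick check with the standard library:

```python
import ipaddress

# Values taken from the systemd-networkd lease entry above.
lease = ipaddress.ip_interface("172.31.26.113/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(lease.network)             # 172.31.16.0/20
print(gateway in lease.network)  # True: gateway is inside the leased subnet
```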
Feb 13 15:30:33.812225 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:30:33.821198 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:30:33.829230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:30:33.855160 lvm[1841]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:30:33.965423 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:30:33.971368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:30:33.974344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:30:33.976288 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:30:33.977714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:30:33.980563 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:30:33.982211 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:30:33.983747 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:30:33.985514 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:30:33.987517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:30:33.987572 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:30:33.990074 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:30:33.994351 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:30:34.001678 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:30:34.018898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:30:34.026567 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:30:34.031812 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:30:34.034297 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:30:34.037599 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:30:34.038888 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:30:34.040045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:30:34.040084 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:30:34.043945 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:30:34.049178 lvm[1848]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:30:34.053382 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:30:34.064053 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:30:34.069644 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:30:34.074060 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 15:30:34.075352 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:30:34.078752 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:30:34.091075 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:30:34.105959 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:30:34.134176 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:30:34.139134 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:30:34.173014 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:30:34.175013 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:30:34.175752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:30:34.181764 jq[1853]: false Feb 13 15:30:34.189302 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:30:34.201900 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:30:34.204763 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:30:34.214456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:30:34.214814 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:30:34.217272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:30:34.218390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:30:34.255144 jq[1870]: true Feb 13 15:30:34.270390 update_engine[1866]: I20250213 15:30:34.262183 1866 main.cc:92] Flatcar Update Engine starting Feb 13 15:30:34.268671 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:30:34.267864 dbus-daemon[1852]: [system] SELinux support is enabled Feb 13 15:30:34.278160 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:30:34.281269 dbus-daemon[1852]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1720 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:30:34.278206 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:30:34.279969 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:30:34.279998 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:30:34.287817 extend-filesystems[1854]: Found loop4 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found loop5 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found loop6 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found loop7 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p1 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p2 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p3 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found usr Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p4 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p6 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p7 Feb 13 15:30:34.287817 extend-filesystems[1854]: Found nvme0n1p9 Feb 13 15:30:34.287817 extend-filesystems[1854]: Checking size of /dev/nvme0n1p9 Feb 13 15:30:34.336527 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:30:34.300291 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.321 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.324 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.325 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.327 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.327 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.328 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.332 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.332 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.333 INFO Fetch failed with 404: resource not found Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.333 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.334 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.337 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.337 INFO Fetch successful Feb 13 15:30:34.341528 coreos-metadata[1851]: Feb 13 15:30:34.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:30:34.342629 update_engine[1866]: I20250213 15:30:34.310822 1866 update_check_scheduler.cc:74] Next update check in 2m15s Feb 13 15:30:34.365720 jq[1881]: true Feb 13 15:30:34.366492 coreos-metadata[1851]: Feb 13 
15:30:34.355 INFO Fetch successful Feb 13 15:30:34.366492 coreos-metadata[1851]: Feb 13 15:30:34.355 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:30:34.353211 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:30:34.355162 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:30:34.355533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:30:34.375171 coreos-metadata[1851]: Feb 13 15:30:34.370 INFO Fetch successful Feb 13 15:30:34.373448 ntpd[1856]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: ---------------------------------------------------- Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: corporation. Support and training for ntp-4 are Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: available at https://www.nwtime.org/support Feb 13 15:30:34.376385 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: ---------------------------------------------------- Feb 13 15:30:34.375432 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:30:34.373475 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:30:34.373487 ntpd[1856]: ---------------------------------------------------- Feb 13 15:30:34.373496 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:30:34.373562 ntpd[1856]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:30:34.373576 ntpd[1856]: corporation. 
Support and training for ntp-4 are Feb 13 15:30:34.373586 ntpd[1856]: available at https://www.nwtime.org/support Feb 13 15:30:34.373596 ntpd[1856]: ---------------------------------------------------- Feb 13 15:30:34.390040 (ntainerd)[1888]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:30:34.407828 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: proto: precision = 0.077 usec (-24) Feb 13 15:30:34.407388 ntpd[1856]: proto: precision = 0.077 usec (-24) Feb 13 15:30:34.409239 ntpd[1856]: basedate set to 2025-02-01 Feb 13 15:30:34.411457 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: basedate set to 2025-02-01 Feb 13 15:30:34.411457 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: gps base set to 2025-02-02 (week 2352) Feb 13 15:30:34.409265 ntpd[1856]: gps base set to 2025-02-02 (week 2352) Feb 13 15:30:34.423239 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listen normally on 3 eth0 172.31.26.113:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listen normally on 4 lo [::1]:123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: bind(21) AF_INET6 fe80::452:9dff:fe8b:8a8f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: unable to create socket on eth0 (5) for fe80::452:9dff:fe8b:8a8f%2#123 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: failed to init interface for address fe80::452:9dff:fe8b:8a8f%2 Feb 13 15:30:34.427094 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: Listening on routing socket on fd #21 for interface updates Feb 13 15:30:34.423313 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:30:34.423506 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:30:34.430263 extend-filesystems[1854]: Resized partition /dev/nvme0n1p9 Feb 13 15:30:34.423546 ntpd[1856]: Listen normally on 3 eth0 172.31.26.113:123 Feb 13 15:30:34.423587 ntpd[1856]: Listen normally on 4 lo [::1]:123 Feb 13 15:30:34.423637 ntpd[1856]: bind(21) AF_INET6 fe80::452:9dff:fe8b:8a8f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:30:34.423660 ntpd[1856]: unable to create socket on eth0 (5) for fe80::452:9dff:fe8b:8a8f%2#123 Feb 13 15:30:34.423675 ntpd[1856]: failed to init interface for address fe80::452:9dff:fe8b:8a8f%2 Feb 13 15:30:34.423718 ntpd[1856]: Listening on routing socket on fd #21 for interface updates Feb 13 15:30:34.447816 extend-filesystems[1908]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:30:34.458131 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:30:34.462877 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:30:34.462877 ntpd[1856]: 13 Feb 15:30:34 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:30:34.461520 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:30:34.466811 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:30:34.521446 systemd[1]: Finished setup-oem.service - Setup OEM. 
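
The ntpd entries above show the daemon opening UDP port 123 on each interface (and failing on the not-yet-ready IPv6 link-local address). For illustration only, here is a minimal SNTP (RFC 4330) query in Python, a hand-rolled sketch of the kind of exchange that happens on that port, not of ntpd's internals; the server name is an assumption.

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server="pool.ntp.org", timeout=5.0):
    # First byte 0x1B = LI 0, version 3, mode 3 (client); the rest of the
    # 48-byte request may be zero for a simple query.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit Timestamp: the seconds field lives at bytes 40..43 of the reply.
    ntp_seconds = struct.unpack("!I", data[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    remote = sntp_time()
    print("server:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(remote)))
    print("local :", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))
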
Feb 13 15:30:34.558288 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:30:34.566058 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:30:34.597236 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:30:34.598016 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:30:34.599701 dbus-daemon[1852]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1887 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:30:34.611441 systemd-logind[1864]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:30:34.611475 systemd-logind[1864]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:30:34.654556 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:30:34.611497 systemd-logind[1864]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:30:34.629984 systemd-logind[1864]: New seat seat0. Feb 13 15:30:34.633122 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:30:34.634367 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:30:34.661715 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:30:34.662384 extend-filesystems[1908]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:30:34.662384 extend-filesystems[1908]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:30:34.662384 extend-filesystems[1908]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:30:34.661993 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:30:34.677484 extend-filesystems[1854]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:30:34.681743 bash[1922]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:30:34.678140 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:30:34.693190 systemd[1]: Starting sshkeys.service... Feb 13 15:30:34.711912 polkitd[1930]: Started polkitd version 121 Feb 13 15:30:34.780617 polkitd[1930]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:30:34.780714 polkitd[1930]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:30:34.810013 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:30:34.815388 polkitd[1930]: Finished loading, compiling and executing 2 rules Feb 13 15:30:34.824105 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:30:34.830505 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:30:34.831143 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:30:34.834715 polkitd[1930]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:30:34.899822 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1724) Feb 13 15:30:34.926136 systemd-hostnamed[1887]: Hostname set to (transient) Feb 13 15:30:34.926278 systemd-resolved[1681]: System hostname changed to 'ip-172-31-26-113'. 
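
The extend-filesystems and EXT4-fs entries above record an on-line grow of the root filesystem from 553472 to 1489915 4 KiB blocks (roughly 2.1 GiB to 5.7 GiB) while it is mounted at /. A minimal sketch of the same operation, assuming root privileges, with the device name taken from this log:

import subprocess

DEVICE = "/dev/nvme0n1p9"  # root filesystem device on this node, per the log

def grow_to_partition(device):
    # Without an explicit size argument, resize2fs grows the filesystem to
    # fill the (already enlarged) partition; ext4 supports doing this while
    # the filesystem is mounted, which is what happened above.
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_to_partition(DEVICE)
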
Feb 13 15:30:35.072005 containerd[1888]: time="2025-02-13T15:30:35.068020437Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:30:35.082293 locksmithd[1896]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:30:35.093782 coreos-metadata[1950]: Feb 13 15:30:35.093 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:30:35.099333 coreos-metadata[1950]: Feb 13 15:30:35.099 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:30:35.102823 coreos-metadata[1950]: Feb 13 15:30:35.100 INFO Fetch successful Feb 13 15:30:35.102823 coreos-metadata[1950]: Feb 13 15:30:35.100 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:30:35.103566 coreos-metadata[1950]: Feb 13 15:30:35.103 INFO Fetch successful Feb 13 15:30:35.105890 unknown[1950]: wrote ssh authorized keys file for user: core Feb 13 15:30:35.191209 update-ssh-keys[2021]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:30:35.193550 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:30:35.206900 systemd[1]: Finished sshkeys.service. Feb 13 15:30:35.221774 containerd[1888]: time="2025-02-13T15:30:35.220123734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.226662 containerd[1888]: time="2025-02-13T15:30:35.225152049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:30:35.226773 containerd[1888]: time="2025-02-13T15:30:35.226688238Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:30:35.226773 containerd[1888]: time="2025-02-13T15:30:35.226725794Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:30:35.227095 containerd[1888]: time="2025-02-13T15:30:35.226935161Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:30:35.227095 containerd[1888]: time="2025-02-13T15:30:35.226966996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227095 containerd[1888]: time="2025-02-13T15:30:35.227048424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227095 containerd[1888]: time="2025-02-13T15:30:35.227068338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227393 containerd[1888]: time="2025-02-13T15:30:35.227344483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227393 containerd[1888]: time="2025-02-13T15:30:35.227370881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227492 containerd[1888]: time="2025-02-13T15:30:35.227393463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227492 containerd[1888]: time="2025-02-13T15:30:35.227407676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.227565 containerd[1888]: time="2025-02-13T15:30:35.227515154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.229890 containerd[1888]: time="2025-02-13T15:30:35.228949539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:30:35.229890 containerd[1888]: time="2025-02-13T15:30:35.229165119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:30:35.229890 containerd[1888]: time="2025-02-13T15:30:35.229188038Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:30:35.229890 containerd[1888]: time="2025-02-13T15:30:35.229305415Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:30:35.229890 containerd[1888]: time="2025-02-13T15:30:35.229415880Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:30:35.240424 containerd[1888]: time="2025-02-13T15:30:35.240379814Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:30:35.240609 containerd[1888]: time="2025-02-13T15:30:35.240581176Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:30:35.240662 containerd[1888]: time="2025-02-13T15:30:35.240617428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:30:35.240662 containerd[1888]: time="2025-02-13T15:30:35.240641064Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:30:35.240725 containerd[1888]: time="2025-02-13T15:30:35.240663564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.240868503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241209649Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241355288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241379184Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241402361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241424959Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241445579Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241466237Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241764308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241834892Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241856002Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241877322Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241896942Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:30:35.242813 containerd[1888]: time="2025-02-13T15:30:35.241932395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.241955385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.241975440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.241995679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242017043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242037525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242055786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242083011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242102867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242124881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242142071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242161037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242183372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242209365Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242244873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.243526 containerd[1888]: time="2025-02-13T15:30:35.242267219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242286491Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242353292Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242383486Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242400106Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242418963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242433659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242454942Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242471174Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:30:35.246261 containerd[1888]: time="2025-02-13T15:30:35.242489470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:30:35.250164 containerd[1888]: time="2025-02-13T15:30:35.248219388Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:30:35.250164 containerd[1888]: time="2025-02-13T15:30:35.248342578Z" level=info msg="Connect containerd service" Feb 13 15:30:35.250164 containerd[1888]: time="2025-02-13T15:30:35.248420942Z" level=info msg="using legacy CRI server" Feb 13 15:30:35.250164 containerd[1888]: time="2025-02-13T15:30:35.248433489Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:30:35.250164 containerd[1888]: time="2025-02-13T15:30:35.248719088Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:30:35.261605 containerd[1888]: time="2025-02-13T15:30:35.261481485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:30:35.262615 
containerd[1888]: time="2025-02-13T15:30:35.262574962Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:30:35.262710 containerd[1888]: time="2025-02-13T15:30:35.262662998Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:30:35.267924 containerd[1888]: time="2025-02-13T15:30:35.267861610Z" level=info msg="Start subscribing containerd event" Feb 13 15:30:35.268044 containerd[1888]: time="2025-02-13T15:30:35.267953663Z" level=info msg="Start recovering state" Feb 13 15:30:35.268087 containerd[1888]: time="2025-02-13T15:30:35.268055250Z" level=info msg="Start event monitor" Feb 13 15:30:35.268087 containerd[1888]: time="2025-02-13T15:30:35.268077049Z" level=info msg="Start snapshots syncer" Feb 13 15:30:35.268152 containerd[1888]: time="2025-02-13T15:30:35.268093429Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:30:35.268152 containerd[1888]: time="2025-02-13T15:30:35.268106664Z" level=info msg="Start streaming server" Feb 13 15:30:35.268219 containerd[1888]: time="2025-02-13T15:30:35.268181555Z" level=info msg="containerd successfully booted in 0.202678s" Feb 13 15:30:35.275214 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:30:35.317942 systemd-networkd[1720]: eth0: Gained IPv6LL Feb 13 15:30:35.324159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:30:35.327154 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:30:35.341105 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:30:35.351160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:35.368046 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:30:35.408352 sshd_keygen[1868]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:30:35.424966 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:30:35.456257 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:30:35.467964 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:30:35.487546 amazon-ssm-agent[2054]: Initializing new seelog logger Feb 13 15:30:35.488364 amazon-ssm-agent[2054]: New Seelog Logger Creation Complete Feb 13 15:30:35.488511 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.488565 amazon-ssm-agent[2054]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.489102 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 processing appconfig overrides Feb 13 15:30:35.489516 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.489584 amazon-ssm-agent[2054]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.490042 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 processing appconfig overrides Feb 13 15:30:35.490653 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.490726 amazon-ssm-agent[2054]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
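
The containerd entries above end with the CRI plugin failing to load a CNI config because /etc/cni/net.d is empty at this point in boot. A sketch of a minimal conflist that the loader would accept, using an illustrative bridge network (the network name, bridge name, and subnet below are assumptions); on a real node the eventual CNI plugin normally writes this file itself:

import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        # portmap enables hostPort support for pods on this network.
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
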
Feb 13 15:30:35.490898 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 processing appconfig overrides Feb 13 15:30:35.493560 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO Proxy environment variables: Feb 13 15:30:35.510040 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.510040 amazon-ssm-agent[2054]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:30:35.510040 amazon-ssm-agent[2054]: 2025/02/13 15:30:35 processing appconfig overrides Feb 13 15:30:35.518434 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:30:35.518694 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:30:35.532487 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:30:35.578053 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:30:35.591586 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:30:35.594578 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO https_proxy: Feb 13 15:30:35.607387 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:30:35.609596 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:30:35.695118 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO http_proxy: Feb 13 15:30:35.794503 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO no_proxy: Feb 13 15:30:35.893185 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO Agent will take identity from EC2 Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [Registrar] Starting registrar module Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [EC2Identity] EC2 registration was successful. 
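
Both coreos-metadata and the SSM agent's EC2 identity check above start by "Putting http://169.254.169.254/latest/api/token", the IMDSv2 handshake: a PUT obtains a session token, which subsequent GETs present in a header. A minimal stdlib reproduction (it only works from inside an EC2 instance; absent resources return 404, as the ipv6 fetch above did):

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    # PUT the token endpoint with a TTL header; the body of the response
    # is the session token itself.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    # GETs carry the token in the X-aws-ec2-metadata-token header.
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("/2021-01-03/meta-data/instance-id",
                 "/2021-01-03/meta-data/local-ipv4"):
        print(path, "=>", imds_get(path, token))
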
Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:30:35.982437 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:30:35.991637 amazon-ssm-agent[2054]: 2025-02-13 15:30:35 INFO [CredentialRefresher] Next credential rotation will be in 31.733326140066666 minutes Feb 13 15:30:37.010164 amazon-ssm-agent[2054]: 2025-02-13 15:30:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:30:37.111827 amazon-ssm-agent[2054]: 2025-02-13 15:30:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2090) started Feb 13 15:30:37.212705 amazon-ssm-agent[2054]: 2025-02-13 15:30:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:30:37.374112 ntpd[1856]: Listen normally on 6 eth0 [fe80::452:9dff:fe8b:8a8f%2]:123 Feb 13 15:30:37.374605 ntpd[1856]: 13 Feb 15:30:37 ntpd[1856]: Listen normally on 6 eth0 [fe80::452:9dff:fe8b:8a8f%2]:123 Feb 13 15:30:37.635879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:37.641106 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:30:37.646923 systemd[1]: Startup finished in 736ms (kernel) + 6.759s (initrd) + 8.208s (userspace) = 15.704s. Feb 13 15:30:37.653383 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:39.032926 kubelet[2106]: E0213 15:30:39.032866 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:39.039634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:39.039860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:42.116477 systemd-resolved[1681]: Clock change detected. Flushing caches. Feb 13 15:30:44.795935 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:30:44.803675 systemd[1]: Started sshd@0-172.31.26.113:22-139.178.89.65:55602.service - OpenSSH per-connection server daemon (139.178.89.65:55602). Feb 13 15:30:45.024508 sshd[2118]: Accepted publickey for core from 139.178.89.65 port 55602 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:45.026487 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:45.041519 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:30:45.047693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:30:45.050640 systemd-logind[1864]: New session 1 of user core. Feb 13 15:30:45.066309 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:30:45.073532 systemd[1]: Starting user@500.service - User Manager for UID 500... 
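
The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is expected before the node joins a cluster; on kubeadm-managed nodes that file is written during init/join. Purely as a sketch of the file's shape, with assumed values rather than this node's eventual config:

import pathlib
import textwrap

KUBELET_CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd      # matches the SystemdCgroup=true runc option above
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print("wrote", path)
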
Feb 13 15:30:45.079009 (systemd)[2122]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:30:45.201478 systemd[2122]: Queued start job for default target default.target. Feb 13 15:30:45.212604 systemd[2122]: Created slice app.slice - User Application Slice. Feb 13 15:30:45.212648 systemd[2122]: Reached target paths.target - Paths. Feb 13 15:30:45.212671 systemd[2122]: Reached target timers.target - Timers. Feb 13 15:30:45.216876 systemd[2122]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:30:45.229749 systemd[2122]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:30:45.229902 systemd[2122]: Reached target sockets.target - Sockets. Feb 13 15:30:45.229925 systemd[2122]: Reached target basic.target - Basic System. Feb 13 15:30:45.229977 systemd[2122]: Reached target default.target - Main User Target. Feb 13 15:30:45.230017 systemd[2122]: Startup finished in 143ms. Feb 13 15:30:45.230425 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:30:45.240377 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:30:45.394513 systemd[1]: Started sshd@1-172.31.26.113:22-139.178.89.65:55606.service - OpenSSH per-connection server daemon (139.178.89.65:55606). Feb 13 15:30:45.563621 sshd[2133]: Accepted publickey for core from 139.178.89.65 port 55606 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:45.565293 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:45.572702 systemd-logind[1864]: New session 2 of user core. Feb 13 15:30:45.581320 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:30:45.708379 sshd[2135]: Connection closed by 139.178.89.65 port 55606 Feb 13 15:30:45.709444 sshd-session[2133]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:45.713442 systemd[1]: sshd@1-172.31.26.113:22-139.178.89.65:55606.service: Deactivated successfully. Feb 13 15:30:45.715962 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:30:45.717510 systemd-logind[1864]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:30:45.719408 systemd-logind[1864]: Removed session 2. Feb 13 15:30:45.745496 systemd[1]: Started sshd@2-172.31.26.113:22-139.178.89.65:55620.service - OpenSSH per-connection server daemon (139.178.89.65:55620). Feb 13 15:30:45.913300 sshd[2140]: Accepted publickey for core from 139.178.89.65 port 55620 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:45.917773 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:45.930600 systemd-logind[1864]: New session 3 of user core. Feb 13 15:30:45.940308 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:30:46.056480 sshd[2142]: Connection closed by 139.178.89.65 port 55620 Feb 13 15:30:46.057762 sshd-session[2140]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:46.061063 systemd[1]: sshd@2-172.31.26.113:22-139.178.89.65:55620.service: Deactivated successfully. Feb 13 15:30:46.063533 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:30:46.065091 systemd-logind[1864]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:30:46.066412 systemd-logind[1864]: Removed session 3. Feb 13 15:30:46.103503 systemd[1]: Started sshd@3-172.31.26.113:22-139.178.89.65:55622.service - OpenSSH per-connection server daemon (139.178.89.65:55622). 
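
The "Accepted publickey ... RSA SHA256:v7hT..." records above use OpenSSH's SHA256 fingerprint format: the unpadded base64 of the SHA-256 digest of the decoded key blob. It can be recomputed from any authorized_keys line, for example the file that update-ssh-keys wrote earlier in this boot:

import base64
import hashlib
import sys

def ssh_sha256_fingerprint(authorized_keys_line):
    # Field layout of an authorized_keys line: "<keytype> <base64-blob> [comment]".
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded with padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    keyfile = sys.argv[1] if len(sys.argv) > 1 else "/home/core/.ssh/authorized_keys"
    for line in open(keyfile):
        line = line.strip()
        if line and not line.startswith("#"):
            print(ssh_sha256_fingerprint(line), line.split()[0])
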
Feb 13 15:30:46.266902 sshd[2147]: Accepted publickey for core from 139.178.89.65 port 55622 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:46.268917 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:46.275753 systemd-logind[1864]: New session 4 of user core. Feb 13 15:30:46.283309 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:30:46.403997 sshd[2149]: Connection closed by 139.178.89.65 port 55622 Feb 13 15:30:46.407662 sshd-session[2147]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:46.410986 systemd[1]: sshd@3-172.31.26.113:22-139.178.89.65:55622.service: Deactivated successfully. Feb 13 15:30:46.417981 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:30:46.421257 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:30:46.422710 systemd-logind[1864]: Removed session 4. Feb 13 15:30:46.440508 systemd[1]: Started sshd@4-172.31.26.113:22-139.178.89.65:55636.service - OpenSSH per-connection server daemon (139.178.89.65:55636). Feb 13 15:30:46.607444 sshd[2154]: Accepted publickey for core from 139.178.89.65 port 55636 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:46.609165 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:46.619372 systemd-logind[1864]: New session 5 of user core. Feb 13 15:30:46.633350 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:30:46.748744 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:30:46.749802 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:30:46.763703 sudo[2157]: pam_unix(sudo:session): session closed for user root Feb 13 15:30:46.786848 sshd[2156]: Connection closed by 139.178.89.65 port 55636 Feb 13 15:30:46.787881 sshd-session[2154]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:46.791382 systemd[1]: sshd@4-172.31.26.113:22-139.178.89.65:55636.service: Deactivated successfully. Feb 13 15:30:46.793364 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:30:46.795159 systemd-logind[1864]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:30:46.796523 systemd-logind[1864]: Removed session 5. Feb 13 15:30:46.820438 systemd[1]: Started sshd@5-172.31.26.113:22-139.178.89.65:55652.service - OpenSSH per-connection server daemon (139.178.89.65:55652). Feb 13 15:30:46.984429 sshd[2162]: Accepted publickey for core from 139.178.89.65 port 55652 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:46.986054 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:46.991168 systemd-logind[1864]: New session 6 of user core. Feb 13 15:30:46.998476 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:30:47.098712 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:30:47.099528 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:30:47.105006 sudo[2166]: pam_unix(sudo:session): session closed for user root Feb 13 15:30:47.113334 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:30:47.113894 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:30:47.129720 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:30:47.172489 augenrules[2188]: No rules Feb 13 15:30:47.174621 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:30:47.174981 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:30:47.177178 sudo[2165]: pam_unix(sudo:session): session closed for user root Feb 13 15:30:47.199770 sshd[2164]: Connection closed by 139.178.89.65 port 55652 Feb 13 15:30:47.200525 sshd-session[2162]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:47.203721 systemd[1]: sshd@5-172.31.26.113:22-139.178.89.65:55652.service: Deactivated successfully. Feb 13 15:30:47.205871 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:30:47.207835 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:30:47.209786 systemd-logind[1864]: Removed session 6. Feb 13 15:30:47.237536 systemd[1]: Started sshd@6-172.31.26.113:22-139.178.89.65:55668.service - OpenSSH per-connection server daemon (139.178.89.65:55668). Feb 13 15:30:47.404759 sshd[2196]: Accepted publickey for core from 139.178.89.65 port 55668 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:30:47.406714 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:47.412299 systemd-logind[1864]: New session 7 of user core. Feb 13 15:30:47.421280 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:30:47.517710 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:30:47.518117 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:30:48.485138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:48.497367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:48.540912 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)... Feb 13 15:30:48.540934 systemd[1]: Reloading... Feb 13 15:30:48.755406 zram_generator::config[2275]: No configuration found. Feb 13 15:30:48.891597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:30:48.994585 systemd[1]: Reloading finished in 453 ms. Feb 13 15:30:49.062849 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:30:49.062957 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:30:49.063437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:49.067617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:49.270402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
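
Earlier in this session the two files under /etc/audit/rules.d were deleted and augenrules consequently reported "No rules" when audit-rules.service restarted. Drop-ins in that directory use auditctl syntax, one rule per line; the watch rules below are illustrative assumptions, not the rules this node removed:

import pathlib

RULES = "\n".join([
    "-w /etc/kubernetes/ -p wa -k kube-config",            # watch writes/attribute changes
    "-w /var/lib/kubelet/config.yaml -p wa -k kubelet-config",
]) + "\n"

path = pathlib.Path("/etc/audit/rules.d/90-demo.rules")
path.write_text(RULES)
print("wrote", path, "- run 'augenrules --load' (or restart audit-rules) to apply")
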
Feb 13 15:30:49.283745 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:30:49.360618 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:30:49.362039 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:30:49.362039 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:30:49.362039 kubelet[2332]: I0213 15:30:49.361220 2332 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:30:49.970408 kubelet[2332]: I0213 15:30:49.970369 2332 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:30:49.970408 kubelet[2332]: I0213 15:30:49.970400 2332 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:30:49.970752 kubelet[2332]: I0213 15:30:49.970731 2332 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:30:50.014407 kubelet[2332]: I0213 15:30:50.012737 2332 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:30:50.026152 kubelet[2332]: E0213 15:30:50.026110 2332 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:30:50.026152 kubelet[2332]: I0213 15:30:50.026142 2332 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:30:50.035130 kubelet[2332]: I0213 15:30:50.035101 2332 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:30:50.035295 kubelet[2332]: I0213 15:30:50.035228 2332 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:30:50.035418 kubelet[2332]: I0213 15:30:50.035382 2332 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:30:50.035606 kubelet[2332]: I0213 15:30:50.035414 2332 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.26.113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:30:50.035749 kubelet[2332]: I0213 15:30:50.035621 2332 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:30:50.035749 kubelet[2332]: I0213 15:30:50.035636 2332 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:30:50.035837 kubelet[2332]: I0213 15:30:50.035772 2332 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:30:50.038919 kubelet[2332]: I0213 15:30:50.038583 2332 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:30:50.038919 kubelet[2332]: I0213 15:30:50.038618 2332 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:30:50.038919 kubelet[2332]: I0213 15:30:50.038658 2332 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:30:50.038919 kubelet[2332]: I0213 15:30:50.038674 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:30:50.043560 kubelet[2332]: E0213 15:30:50.043529 2332 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:50.043781 kubelet[2332]: E0213 15:30:50.043766 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:50.046918 kubelet[2332]: I0213 15:30:50.046893 2332 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:30:50.049330 kubelet[2332]: I0213 15:30:50.049307 2332 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:30:50.050159 kubelet[2332]: W0213 15:30:50.050130 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:30:50.050955 kubelet[2332]: I0213 15:30:50.050803 2332 server.go:1269] "Started kubelet" Feb 13 15:30:50.052550 kubelet[2332]: I0213 15:30:50.052203 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:30:50.067106 kubelet[2332]: I0213 15:30:50.066305 2332 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:30:50.071629 kubelet[2332]: I0213 15:30:50.071597 2332 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:30:50.073131 kubelet[2332]: I0213 15:30:50.072850 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:30:50.073131 kubelet[2332]: I0213 15:30:50.073133 2332 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:30:50.074056 kubelet[2332]: I0213 15:30:50.074032 2332 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:30:50.076281 kubelet[2332]: I0213 15:30:50.076174 2332 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:30:50.076664 kubelet[2332]: E0213 15:30:50.076637 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.077057 kubelet[2332]: I0213 15:30:50.077033 2332 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:30:50.077228 kubelet[2332]: I0213 15:30:50.077112 2332 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:30:50.089715 kubelet[2332]: W0213 15:30:50.089640 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:30:50.089927 kubelet[2332]: E0213 15:30:50.089737 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:30:50.090475 kubelet[2332]: W0213 15:30:50.090262 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.26.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:30:50.090475 kubelet[2332]: E0213 15:30:50.090328 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.26.113\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:30:50.094342 kubelet[2332]: I0213 15:30:50.094319 2332 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:30:50.094479 kubelet[2332]: I0213 15:30:50.094444 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 
15:30:50.101937 kubelet[2332]: I0213 15:30:50.101872 2332 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:30:50.108931 kubelet[2332]: E0213 15:30:50.108892 2332 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:30:50.127875 kubelet[2332]: W0213 15:30:50.127722 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:30:50.127875 kubelet[2332]: E0213 15:30:50.127769 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 15:30:50.139102 kubelet[2332]: E0213 15:30:50.127836 2332 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.113.1823ce3ec30bc65a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.113,UID:172.31.26.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.26.113,},FirstTimestamp:2025-02-13 15:30:50.050766426 +0000 UTC m=+0.762053415,LastTimestamp:2025-02-13 15:30:50.050766426 +0000 UTC m=+0.762053415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.113,}" Feb 13 15:30:50.140900 kubelet[2332]: E0213 15:30:50.140857 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.26.113\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 15:30:50.147550 kubelet[2332]: I0213 15:30:50.147524 2332 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:30:50.147550 kubelet[2332]: I0213 15:30:50.147546 2332 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:30:50.147715 kubelet[2332]: I0213 15:30:50.147564 2332 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:30:50.158259 kubelet[2332]: I0213 15:30:50.158231 2332 policy_none.go:49] "None policy: Start" Feb 13 15:30:50.162972 kubelet[2332]: I0213 15:30:50.162946 2332 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:30:50.163199 kubelet[2332]: I0213 15:30:50.163185 2332 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:30:50.177490 kubelet[2332]: E0213 15:30:50.177297 2332 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.113.1823ce3ec6823715 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.113,UID:172.31.26.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid 
capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.26.113,},FirstTimestamp:2025-02-13 15:30:50.108860181 +0000 UTC m=+0.820147169,LastTimestamp:2025-02-13 15:30:50.108860181 +0000 UTC m=+0.820147169,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.113,}" Feb 13 15:30:50.177755 kubelet[2332]: E0213 15:30:50.177701 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.181535 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:30:50.188921 kubelet[2332]: E0213 15:30:50.188780 2332 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.113.1823ce3ec8c49579 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.113,UID:172.31.26.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.26.113 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.26.113,},FirstTimestamp:2025-02-13 15:30:50.146764153 +0000 UTC m=+0.858051133,LastTimestamp:2025-02-13 15:30:50.146764153 +0000 UTC m=+0.858051133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.113,}" Feb 13 15:30:50.197397 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:30:50.202203 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
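
The rejected events above carry names like 172.31.26.113.1823ce3ec30bc65a: Kubernetes names an event <object>.<first-seen UnixNano in hex>, so the suffix can be decoded and cross-checked against the FirstTimestamp printed in the same record:

import datetime

def decode_event_suffix(name):
    # Split on the last dot: everything before it is the involved object's
    # name, the hex tail is the event's first-seen time in nanoseconds.
    obj, _, hexstamp = name.rpartition(".")
    nanos = int(hexstamp, 16)
    ts = datetime.datetime.fromtimestamp(nanos / 1e9, tz=datetime.timezone.utc)
    return obj, ts

obj, ts = decode_event_suffix("172.31.26.113.1823ce3ec30bc65a")
print(obj, ts.isoformat())
# -> 172.31.26.113 2025-02-13T15:30:50.050766+00:00, matching the
#    FirstTimestamp in the "Starting kubelet." event above.
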
Feb 13 15:30:50.204614 kubelet[2332]: E0213 15:30:50.204427 2332 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.113.1823ce3ec8c4e15a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.113,UID:172.31.26.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.26.113 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.26.113,},FirstTimestamp:2025-02-13 15:30:50.146783578 +0000 UTC m=+0.858070543,LastTimestamp:2025-02-13 15:30:50.146783578 +0000 UTC m=+0.858070543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.113,}" Feb 13 15:30:50.210963 kubelet[2332]: I0213 15:30:50.210932 2332 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:30:50.214683 kubelet[2332]: I0213 15:30:50.214198 2332 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:30:50.214683 kubelet[2332]: I0213 15:30:50.214221 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:30:50.214683 kubelet[2332]: I0213 15:30:50.214565 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:30:50.224705 kubelet[2332]: E0213 15:30:50.224499 2332 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.113.1823ce3ec8c4f3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.113,UID:172.31.26.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.26.113 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.26.113,},FirstTimestamp:2025-02-13 15:30:50.146788304 +0000 UTC m=+0.858075285,LastTimestamp:2025-02-13 15:30:50.146788304 +0000 UTC m=+0.858075285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.113,}" Feb 13 15:30:50.258525 kubelet[2332]: E0213 15:30:50.258361 2332 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.113\" not found" Feb 13 15:30:50.275765 kubelet[2332]: I0213 15:30:50.275697 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:30:50.283838 kubelet[2332]: I0213 15:30:50.282534 2332 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:30:50.283838 kubelet[2332]: I0213 15:30:50.282573 2332 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:30:50.283838 kubelet[2332]: I0213 15:30:50.282596 2332 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:30:50.283838 kubelet[2332]: E0213 15:30:50.282656 2332 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 15:30:50.315720 kubelet[2332]: I0213 15:30:50.315688 2332 kubelet_node_status.go:72] "Attempting to register node" node="172.31.26.113" Feb 13 15:30:50.329955 kubelet[2332]: I0213 15:30:50.329920 2332 kubelet_node_status.go:75] "Successfully registered node" node="172.31.26.113" Feb 13 15:30:50.329955 kubelet[2332]: E0213 15:30:50.329958 2332 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.113\": node \"172.31.26.113\" not found" Feb 13 15:30:50.347631 kubelet[2332]: E0213 15:30:50.347597 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.448361 kubelet[2332]: E0213 15:30:50.448219 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.548940 kubelet[2332]: E0213 15:30:50.548782 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.649808 kubelet[2332]: E0213 15:30:50.649761 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.750566 kubelet[2332]: E0213 15:30:50.750514 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.827393 sudo[2199]: pam_unix(sudo:session): session closed for user root Feb 13 15:30:50.850953 sshd[2198]: Connection closed by 139.178.89.65 port 55668 Feb 13 15:30:50.852139 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:50.853485 kubelet[2332]: E0213 15:30:50.853232 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.865647 systemd[1]: sshd@6-172.31.26.113:22-139.178.89.65:55668.service: Deactivated successfully. Feb 13 15:30:50.879200 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:30:50.885875 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:30:50.887926 systemd-logind[1864]: Removed session 7. 
Feb 13 15:30:50.957030 kubelet[2332]: E0213 15:30:50.956982 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:50.973132 kubelet[2332]: I0213 15:30:50.973098 2332 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:30:50.973297 kubelet[2332]: W0213 15:30:50.973258 2332 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:30:50.973351 kubelet[2332]: W0213 15:30:50.973297 2332 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:30:51.044999 kubelet[2332]: E0213 15:30:51.044928 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:51.057377 kubelet[2332]: E0213 15:30:51.057322 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:51.157644 kubelet[2332]: E0213 15:30:51.157518 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.113\" not found" Feb 13 15:30:51.259308 kubelet[2332]: I0213 15:30:51.259280 2332 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:30:51.260084 containerd[1888]: time="2025-02-13T15:30:51.259969968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:30:51.260568 kubelet[2332]: I0213 15:30:51.260367 2332 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:30:52.044584 kubelet[2332]: I0213 15:30:52.044539 2332 apiserver.go:52] "Watching apiserver" Feb 13 15:30:52.045562 kubelet[2332]: E0213 15:30:52.045530 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:52.049354 kubelet[2332]: E0213 15:30:52.049309 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:30:52.056801 systemd[1]: Created slice kubepods-besteffort-pod5af0ab54_6805_48ca_a837_54db06158735.slice - libcontainer container kubepods-besteffort-pod5af0ab54_6805_48ca_a837_54db06158735.slice. Feb 13 15:30:52.075800 systemd[1]: Created slice kubepods-besteffort-pod4375579a_f5d3_42a8_98c2_e1eaac0420d6.slice - libcontainer container kubepods-besteffort-pod4375579a_f5d3_42a8_98c2_e1eaac0420d6.slice. 
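The runtime-config update above hands the node's pod CIDR (192.168.1.0/24) to the CRI; containerd then waits for a CNI plugin to drop a matching network config, hence the "No cni config template is specified" message. A stdlib-only sanity check of what that range provides, for illustration:

    # Inspect the pod CIDR the kubelet pushed through the CRI above.
    import ipaddress

    cidr = ipaddress.ip_network("192.168.1.0/24")
    print(cidr.num_addresses)      # 256 addresses in the node's pod range
    print(cidr[1], "-", cidr[-2])  # 192.168.1.1 - 192.168.1.254 assignable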
Feb 13 15:30:52.078550 kubelet[2332]: I0213 15:30:52.078376 2332 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:30:52.091294 kubelet[2332]: I0213 15:30:52.091255 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f08ddd1e-6241-48e2-83b7-49191ff10a45-varrun\") pod \"csi-node-driver-hk2gm\" (UID: \"f08ddd1e-6241-48e2-83b7-49191ff10a45\") " pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:30:52.091826 kubelet[2332]: I0213 15:30:52.091574 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqbgv\" (UniqueName: \"kubernetes.io/projected/f08ddd1e-6241-48e2-83b7-49191ff10a45-kube-api-access-bqbgv\") pod \"csi-node-driver-hk2gm\" (UID: \"f08ddd1e-6241-48e2-83b7-49191ff10a45\") " pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:30:52.091826 kubelet[2332]: I0213 15:30:52.091674 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5af0ab54-6805-48ca-a837-54db06158735-lib-modules\") pod \"kube-proxy-zvzhm\" (UID: \"5af0ab54-6805-48ca-a837-54db06158735\") " pod="kube-system/kube-proxy-zvzhm" Feb 13 15:30:52.091826 kubelet[2332]: I0213 15:30:52.091749 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-xtables-lock\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.091826 kubelet[2332]: I0213 15:30:52.091789 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-var-lib-calico\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.092414 kubelet[2332]: I0213 15:30:52.092102 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-cni-net-dir\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.092414 kubelet[2332]: I0213 15:30:52.092168 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f08ddd1e-6241-48e2-83b7-49191ff10a45-registration-dir\") pod \"csi-node-driver-hk2gm\" (UID: \"f08ddd1e-6241-48e2-83b7-49191ff10a45\") " pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:30:52.092414 kubelet[2332]: I0213 15:30:52.092225 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5af0ab54-6805-48ca-a837-54db06158735-kube-proxy\") pod \"kube-proxy-zvzhm\" (UID: \"5af0ab54-6805-48ca-a837-54db06158735\") " pod="kube-system/kube-proxy-zvzhm" Feb 13 15:30:52.092414 kubelet[2332]: I0213 15:30:52.092357 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5af0ab54-6805-48ca-a837-54db06158735-xtables-lock\") pod \"kube-proxy-zvzhm\" (UID: 
\"5af0ab54-6805-48ca-a837-54db06158735\") " pod="kube-system/kube-proxy-zvzhm" Feb 13 15:30:52.093166 kubelet[2332]: I0213 15:30:52.092396 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb742\" (UniqueName: \"kubernetes.io/projected/5af0ab54-6805-48ca-a837-54db06158735-kube-api-access-sb742\") pod \"kube-proxy-zvzhm\" (UID: \"5af0ab54-6805-48ca-a837-54db06158735\") " pod="kube-system/kube-proxy-zvzhm" Feb 13 15:30:52.093166 kubelet[2332]: I0213 15:30:52.092912 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4375579a-f5d3-42a8-98c2-e1eaac0420d6-tigera-ca-bundle\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.093166 kubelet[2332]: I0213 15:30:52.092954 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-var-run-calico\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.093166 kubelet[2332]: I0213 15:30:52.092977 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-cni-bin-dir\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.093166 kubelet[2332]: I0213 15:30:52.092999 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-cni-log-dir\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.093569 kubelet[2332]: I0213 15:30:52.093057 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f08ddd1e-6241-48e2-83b7-49191ff10a45-socket-dir\") pod \"csi-node-driver-hk2gm\" (UID: \"f08ddd1e-6241-48e2-83b7-49191ff10a45\") " pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:30:52.093569 kubelet[2332]: I0213 15:30:52.093115 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-lib-modules\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.093569 kubelet[2332]: I0213 15:30:52.093140 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-policysync\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.094160 kubelet[2332]: I0213 15:30:52.093748 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ddd1e-6241-48e2-83b7-49191ff10a45-kubelet-dir\") pod \"csi-node-driver-hk2gm\" (UID: \"f08ddd1e-6241-48e2-83b7-49191ff10a45\") " pod="calico-system/csi-node-driver-hk2gm" 
Feb 13 15:30:52.094160 kubelet[2332]: I0213 15:30:52.093796 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4375579a-f5d3-42a8-98c2-e1eaac0420d6-node-certs\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.094160 kubelet[2332]: I0213 15:30:52.093819 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4375579a-f5d3-42a8-98c2-e1eaac0420d6-flexvol-driver-host\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.094160 kubelet[2332]: I0213 15:30:52.093878 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwrv\" (UniqueName: \"kubernetes.io/projected/4375579a-f5d3-42a8-98c2-e1eaac0420d6-kube-api-access-qfwrv\") pod \"calico-node-dgldv\" (UID: \"4375579a-f5d3-42a8-98c2-e1eaac0420d6\") " pod="calico-system/calico-node-dgldv" Feb 13 15:30:52.198869 kubelet[2332]: E0213 15:30:52.198579 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.198869 kubelet[2332]: W0213 15:30:52.198605 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.198869 kubelet[2332]: E0213 15:30:52.198634 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.199226 kubelet[2332]: E0213 15:30:52.198959 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.199226 kubelet[2332]: W0213 15:30:52.198971 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.199226 kubelet[2332]: E0213 15:30:52.199044 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.200326 kubelet[2332]: E0213 15:30:52.199856 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.200326 kubelet[2332]: W0213 15:30:52.199871 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.200326 kubelet[2332]: E0213 15:30:52.199897 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.200326 kubelet[2332]: E0213 15:30:52.200252 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.200326 kubelet[2332]: W0213 15:30:52.200262 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.200571 kubelet[2332]: E0213 15:30:52.200345 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.203552 kubelet[2332]: E0213 15:30:52.203246 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.203552 kubelet[2332]: W0213 15:30:52.203262 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.203552 kubelet[2332]: E0213 15:30:52.203318 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.204084 kubelet[2332]: E0213 15:30:52.203865 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.204084 kubelet[2332]: W0213 15:30:52.203890 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.204084 kubelet[2332]: E0213 15:30:52.203940 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.204804 kubelet[2332]: E0213 15:30:52.204586 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.204804 kubelet[2332]: W0213 15:30:52.204707 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.205102 kubelet[2332]: E0213 15:30:52.204950 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.206549 kubelet[2332]: E0213 15:30:52.206452 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.206549 kubelet[2332]: W0213 15:30:52.206465 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.206923 kubelet[2332]: E0213 15:30:52.206808 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.207133 kubelet[2332]: E0213 15:30:52.207025 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.207133 kubelet[2332]: W0213 15:30:52.207037 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.207338 kubelet[2332]: E0213 15:30:52.207238 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.207508 kubelet[2332]: E0213 15:30:52.207424 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.207508 kubelet[2332]: W0213 15:30:52.207435 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.207800 kubelet[2332]: E0213 15:30:52.207683 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.208025 kubelet[2332]: E0213 15:30:52.207932 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.208025 kubelet[2332]: W0213 15:30:52.207945 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.208305 kubelet[2332]: E0213 15:30:52.208150 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.208432 kubelet[2332]: E0213 15:30:52.208422 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.208645 kubelet[2332]: W0213 15:30:52.208562 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.208998 kubelet[2332]: E0213 15:30:52.208908 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.208998 kubelet[2332]: W0213 15:30:52.208920 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.209565 kubelet[2332]: E0213 15:30:52.209432 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.209565 kubelet[2332]: E0213 15:30:52.209518 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.211582 kubelet[2332]: E0213 15:30:52.211563 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.211582 kubelet[2332]: W0213 15:30:52.211577 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.211774 kubelet[2332]: E0213 15:30:52.211680 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.211933 kubelet[2332]: E0213 15:30:52.211914 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.211933 kubelet[2332]: W0213 15:30:52.211929 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.212114 kubelet[2332]: E0213 15:30:52.211996 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.212185 kubelet[2332]: E0213 15:30:52.212171 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.212185 kubelet[2332]: W0213 15:30:52.212183 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.212303 kubelet[2332]: E0213 15:30:52.212208 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.212476 kubelet[2332]: E0213 15:30:52.212460 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.212476 kubelet[2332]: W0213 15:30:52.212472 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.212609 kubelet[2332]: E0213 15:30:52.212530 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.212681 kubelet[2332]: E0213 15:30:52.212671 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.212681 kubelet[2332]: W0213 15:30:52.212679 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.212794 kubelet[2332]: E0213 15:30:52.212700 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.212996 kubelet[2332]: E0213 15:30:52.212980 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.212996 kubelet[2332]: W0213 15:30:52.212992 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.213174 kubelet[2332]: E0213 15:30:52.213050 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.213540 kubelet[2332]: E0213 15:30:52.213301 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.213540 kubelet[2332]: W0213 15:30:52.213311 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.213540 kubelet[2332]: E0213 15:30:52.213506 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.213774 kubelet[2332]: E0213 15:30:52.213756 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.213774 kubelet[2332]: W0213 15:30:52.213769 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.213937 kubelet[2332]: E0213 15:30:52.213831 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.214042 kubelet[2332]: E0213 15:30:52.213979 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.214042 kubelet[2332]: W0213 15:30:52.213988 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.214042 kubelet[2332]: E0213 15:30:52.214010 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.214307 kubelet[2332]: E0213 15:30:52.214290 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.214307 kubelet[2332]: W0213 15:30:52.214303 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.214406 kubelet[2332]: E0213 15:30:52.214386 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.214660 kubelet[2332]: E0213 15:30:52.214642 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.214660 kubelet[2332]: W0213 15:30:52.214656 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.214889 kubelet[2332]: E0213 15:30:52.214788 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.214963 kubelet[2332]: E0213 15:30:52.214940 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.214963 kubelet[2332]: W0213 15:30:52.214951 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.214963 kubelet[2332]: E0213 15:30:52.214973 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.215276 kubelet[2332]: E0213 15:30:52.215259 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.215276 kubelet[2332]: W0213 15:30:52.215271 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.215561 kubelet[2332]: E0213 15:30:52.215342 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.215693 kubelet[2332]: E0213 15:30:52.215589 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.215693 kubelet[2332]: W0213 15:30:52.215599 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.216046 kubelet[2332]: E0213 15:30:52.215718 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.216230 kubelet[2332]: E0213 15:30:52.216213 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.216230 kubelet[2332]: W0213 15:30:52.216228 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.216456 kubelet[2332]: E0213 15:30:52.216333 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.216545 kubelet[2332]: E0213 15:30:52.216460 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.216545 kubelet[2332]: W0213 15:30:52.216469 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.216545 kubelet[2332]: E0213 15:30:52.216500 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.216982 kubelet[2332]: E0213 15:30:52.216843 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.216982 kubelet[2332]: W0213 15:30:52.216855 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.216982 kubelet[2332]: E0213 15:30:52.216878 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.217299 kubelet[2332]: E0213 15:30:52.217130 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.217299 kubelet[2332]: W0213 15:30:52.217140 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.217299 kubelet[2332]: E0213 15:30:52.217162 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.217497 kubelet[2332]: E0213 15:30:52.217425 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.217497 kubelet[2332]: W0213 15:30:52.217435 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.217497 kubelet[2332]: E0213 15:30:52.217461 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.217729 kubelet[2332]: E0213 15:30:52.217712 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.217729 kubelet[2332]: W0213 15:30:52.217726 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.217878 kubelet[2332]: E0213 15:30:52.217802 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:30:52.217958 kubelet[2332]: E0213 15:30:52.217945 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.218007 kubelet[2332]: W0213 15:30:52.217957 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.218007 kubelet[2332]: E0213 15:30:52.217969 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.233288 kubelet[2332]: E0213 15:30:52.233194 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.233288 kubelet[2332]: W0213 15:30:52.233288 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.233457 kubelet[2332]: E0213 15:30:52.233313 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.246107 kubelet[2332]: E0213 15:30:52.245657 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.246107 kubelet[2332]: W0213 15:30:52.245703 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.246107 kubelet[2332]: E0213 15:30:52.245728 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.248806 kubelet[2332]: E0213 15:30:52.248784 2332 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:30:52.248947 kubelet[2332]: W0213 15:30:52.248932 2332 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:30:52.249031 kubelet[2332]: E0213 15:30:52.249017 2332 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:30:52.371009 containerd[1888]: time="2025-02-13T15:30:52.370832537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvzhm,Uid:5af0ab54-6805-48ca-a837-54db06158735,Namespace:kube-system,Attempt:0,}" Feb 13 15:30:52.384422 containerd[1888]: time="2025-02-13T15:30:52.383308622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgldv,Uid:4375579a-f5d3-42a8-98c2-e1eaac0420d6,Namespace:calico-system,Attempt:0,}" Feb 13 15:30:52.925962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530999164.mount: Deactivated successfully. 
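The long run of FlexVolume failures above is the kubelet's plugin prober calling /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init. That binary is not on the node yet (it is normally installed by a node agent, not by the kubelet), so the call produces no stdout and decoding "" as JSON fails with "unexpected end of JSON input". For orientation only, a minimal sketch of the init handshake a FlexVolume driver is expected to answer; this is not the real nodeagent~uds driver:

    #!/usr/bin/env python3
    # Minimal FlexVolume driver skeleton: answer "init" with a JSON status
    # on stdout. An empty stdout is exactly what the unmarshal errors in
    # the log are complaining about.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())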
Feb 13 15:30:52.935521 containerd[1888]: time="2025-02-13T15:30:52.935462161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:30:52.938637 containerd[1888]: time="2025-02-13T15:30:52.938535190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:30:52.939483 containerd[1888]: time="2025-02-13T15:30:52.939447552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:30:52.940853 containerd[1888]: time="2025-02-13T15:30:52.940792418Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:30:52.943099 containerd[1888]: time="2025-02-13T15:30:52.941461266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:30:52.948962 containerd[1888]: time="2025-02-13T15:30:52.948915973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:30:52.950730 containerd[1888]: time="2025-02-13T15:30:52.950685784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 566.195443ms" Feb 13 15:30:52.952498 containerd[1888]: time="2025-02-13T15:30:52.952436991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.426723ms" Feb 13 15:30:53.046101 kubelet[2332]: E0213 15:30:53.045771 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:53.131485 containerd[1888]: time="2025-02-13T15:30:53.130972392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:30:53.131485 containerd[1888]: time="2025-02-13T15:30:53.131041541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:30:53.131485 containerd[1888]: time="2025-02-13T15:30:53.131066918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:30:53.131485 containerd[1888]: time="2025-02-13T15:30:53.131199321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:30:53.135445 containerd[1888]: time="2025-02-13T15:30:53.135310842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:30:53.135445 containerd[1888]: time="2025-02-13T15:30:53.135367931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:30:53.135445 containerd[1888]: time="2025-02-13T15:30:53.135391307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:30:53.135926 containerd[1888]: time="2025-02-13T15:30:53.135491610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:30:53.265270 systemd[1]: Started cri-containerd-c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f.scope - libcontainer container c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f. Feb 13 15:30:53.270563 systemd[1]: Started cri-containerd-257874ede29ea097189e53900d09295fb8bcae78e67b1a4eaca4c702eb7e6469.scope - libcontainer container 257874ede29ea097189e53900d09295fb8bcae78e67b1a4eaca4c702eb7e6469. Feb 13 15:30:53.283734 kubelet[2332]: E0213 15:30:53.283686 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:30:53.326459 containerd[1888]: time="2025-02-13T15:30:53.326411957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgldv,Uid:4375579a-f5d3-42a8-98c2-e1eaac0420d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\"" Feb 13 15:30:53.330258 containerd[1888]: time="2025-02-13T15:30:53.330146959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:30:53.338990 containerd[1888]: time="2025-02-13T15:30:53.338944303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvzhm,Uid:5af0ab54-6805-48ca-a837-54db06158735,Namespace:kube-system,Attempt:0,} returns sandbox id \"257874ede29ea097189e53900d09295fb8bcae78e67b1a4eaca4c702eb7e6469\"" Feb 13 15:30:54.046418 kubelet[2332]: E0213 15:30:54.046337 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:55.015379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785428973.mount: Deactivated successfully. 
Feb 13 15:30:55.047711 kubelet[2332]: E0213 15:30:55.047441 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:55.143457 containerd[1888]: time="2025-02-13T15:30:55.143405578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:55.144514 containerd[1888]: time="2025-02-13T15:30:55.144384652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 15:30:55.147410 containerd[1888]: time="2025-02-13T15:30:55.146178781Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:55.149255 containerd[1888]: time="2025-02-13T15:30:55.148418112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:55.149255 containerd[1888]: time="2025-02-13T15:30:55.149048225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.818830073s" Feb 13 15:30:55.149255 containerd[1888]: time="2025-02-13T15:30:55.149098392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:30:55.151162 containerd[1888]: time="2025-02-13T15:30:55.151134995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:30:55.152734 containerd[1888]: time="2025-02-13T15:30:55.152704344Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:30:55.173483 containerd[1888]: time="2025-02-13T15:30:55.173433893Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318\"" Feb 13 15:30:55.174412 containerd[1888]: time="2025-02-13T15:30:55.174375859Z" level=info msg="StartContainer for \"13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318\"" Feb 13 15:30:55.216302 systemd[1]: Started cri-containerd-13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318.scope - libcontainer container 13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318. Feb 13 15:30:55.252538 containerd[1888]: time="2025-02-13T15:30:55.252490507Z" level=info msg="StartContainer for \"13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318\" returns successfully" Feb 13 15:30:55.264909 systemd[1]: cri-containerd-13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318.scope: Deactivated successfully. 
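containerd's "Pulled image ... in <duration>" lines (pause:3.8 in roughly 566ms and 581ms above, pod2daemon-flexvol in about 1.82s, kube-proxy below) are handy for rough pull-latency triage. A small parser for that message shape; the sample line is simplified from the log and the field layout is inferred from these lines, not from containerd's source:

    # Extract image name and pull duration from "Pulled image" lines.
    import re

    line = ('Pulled image "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1" '
            'with image id "sha256:2b74..." in 1.818830073s')
    m = re.search(r'Pulled image "([^"]+)".* in ([0-9.]+)(ms|s)$', line)
    if m:
        image, value, unit = m.groups()
        seconds = float(value) / (1000.0 if unit == "ms" else 1.0)
        print(f"{image}: {seconds:.3f}s")  # -> ...pod2daemon-flexvol:v3.29.1: 1.819s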
Feb 13 15:30:55.283877 kubelet[2332]: E0213 15:30:55.283695 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:30:55.345696 containerd[1888]: time="2025-02-13T15:30:55.345485833Z" level=info msg="shim disconnected" id=13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318 namespace=k8s.io Feb 13 15:30:55.347126 containerd[1888]: time="2025-02-13T15:30:55.347064567Z" level=warning msg="cleaning up after shim disconnected" id=13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318 namespace=k8s.io Feb 13 15:30:55.347126 containerd[1888]: time="2025-02-13T15:30:55.347111902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:30:55.974101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13587a2cb147cb35f1ad8b52243c0aa623c8d5af6ac6ca4bd7f3c7c62cf08318-rootfs.mount: Deactivated successfully. Feb 13 15:30:56.047892 kubelet[2332]: E0213 15:30:56.047810 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:56.431239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016992983.mount: Deactivated successfully. Feb 13 15:30:57.048185 kubelet[2332]: E0213 15:30:57.048144 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:57.073345 containerd[1888]: time="2025-02-13T15:30:57.073293746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:57.075027 containerd[1888]: time="2025-02-13T15:30:57.074976131Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 15:30:57.076726 containerd[1888]: time="2025-02-13T15:30:57.076257545Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:57.080273 containerd[1888]: time="2025-02-13T15:30:57.080217842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:30:57.081197 containerd[1888]: time="2025-02-13T15:30:57.081159161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.929983648s" Feb 13 15:30:57.081299 containerd[1888]: time="2025-02-13T15:30:57.081203187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 15:30:57.083997 containerd[1888]: time="2025-02-13T15:30:57.083960726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:30:57.085136 containerd[1888]: time="2025-02-13T15:30:57.085103427Z" level=info msg="CreateContainer within sandbox 
\"257874ede29ea097189e53900d09295fb8bcae78e67b1a4eaca4c702eb7e6469\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:30:57.117182 containerd[1888]: time="2025-02-13T15:30:57.117117217Z" level=info msg="CreateContainer within sandbox \"257874ede29ea097189e53900d09295fb8bcae78e67b1a4eaca4c702eb7e6469\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f457fdbe39b62dd43e20405234e90c05702a0c7939fd4ca6eba060378e526efd\"" Feb 13 15:30:57.118769 containerd[1888]: time="2025-02-13T15:30:57.118732510Z" level=info msg="StartContainer for \"f457fdbe39b62dd43e20405234e90c05702a0c7939fd4ca6eba060378e526efd\"" Feb 13 15:30:57.186280 systemd[1]: Started cri-containerd-f457fdbe39b62dd43e20405234e90c05702a0c7939fd4ca6eba060378e526efd.scope - libcontainer container f457fdbe39b62dd43e20405234e90c05702a0c7939fd4ca6eba060378e526efd. Feb 13 15:30:57.233780 containerd[1888]: time="2025-02-13T15:30:57.233578602Z" level=info msg="StartContainer for \"f457fdbe39b62dd43e20405234e90c05702a0c7939fd4ca6eba060378e526efd\" returns successfully" Feb 13 15:30:57.283768 kubelet[2332]: E0213 15:30:57.283723 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:30:58.050260 kubelet[2332]: E0213 15:30:58.049508 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:59.049757 kubelet[2332]: E0213 15:30:59.049711 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:30:59.283474 kubelet[2332]: E0213 15:30:59.283423 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:00.051089 kubelet[2332]: E0213 15:31:00.050937 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:01.051444 kubelet[2332]: E0213 15:31:01.051395 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:01.283770 kubelet[2332]: E0213 15:31:01.283537 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:01.674467 containerd[1888]: time="2025-02-13T15:31:01.672388747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:01.674467 containerd[1888]: time="2025-02-13T15:31:01.673948470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:31:01.675026 containerd[1888]: time="2025-02-13T15:31:01.674822188Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:01.682566 containerd[1888]: time="2025-02-13T15:31:01.682355408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:01.684527 containerd[1888]: time="2025-02-13T15:31:01.684267631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.600263582s" Feb 13 15:31:01.684527 containerd[1888]: time="2025-02-13T15:31:01.684311146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:31:01.692576 containerd[1888]: time="2025-02-13T15:31:01.692535942Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:31:01.714658 containerd[1888]: time="2025-02-13T15:31:01.714614439Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5\"" Feb 13 15:31:01.716346 containerd[1888]: time="2025-02-13T15:31:01.716308359Z" level=info msg="StartContainer for \"60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5\"" Feb 13 15:31:01.782290 systemd[1]: Started cri-containerd-60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5.scope - libcontainer container 60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5. Feb 13 15:31:01.842798 containerd[1888]: time="2025-02-13T15:31:01.842750249Z" level=info msg="StartContainer for \"60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5\" returns successfully" Feb 13 15:31:02.053706 kubelet[2332]: E0213 15:31:02.053546 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:02.418399 kubelet[2332]: I0213 15:31:02.418143 2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zvzhm" podStartSLOduration=8.675634557 podStartE2EDuration="12.418124482s" podCreationTimestamp="2025-02-13 15:30:50 +0000 UTC" firstStartedPulling="2025-02-13 15:30:53.340143455 +0000 UTC m=+4.051430424" lastFinishedPulling="2025-02-13 15:30:57.082633362 +0000 UTC m=+7.793920349" observedRunningTime="2025-02-13 15:30:57.369093839 +0000 UTC m=+8.080380824" watchObservedRunningTime="2025-02-13 15:31:02.418124482 +0000 UTC m=+13.129411473" Feb 13 15:31:02.590147 containerd[1888]: time="2025-02-13T15:31:02.589979349Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:31:02.592035 systemd[1]: cri-containerd-60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5.scope: Deactivated successfully. 
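
The reload failure just above is containerd's CRI plugin reacting to the write of /etc/cni/net.d/calico-kubeconfig: it rescans the directory on every fs event, and a kubeconfig alone is not a network config, so the load still finds nothing and NetworkReady stays false. A minimal stdlib sketch of that rescan follows; the real loader lives in github.com/containerd/go-cni, and the accepted extensions here are an assumption based on its defaults.

    // cnicheck.go: approximate the "is there a loadable CNI config yet?"
    // test containerd applies after each fs event on /etc/cni/net.d.
    // A stdlib sketch, not the real loader (github.com/containerd/go-cni).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func hasNetworkConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed go-cni defaults
                return true, nil
            }
        }
        return false, nil // calico-kubeconfig alone does not count
    }

    func main() {
        ok, err := hasNetworkConfig("/etc/cni/net.d")
        if err != nil || !ok {
            fmt.Println("cni config load failed: no network config found")
            return
        }
        fmt.Println("cni config loaded; NetworkReady can flip to true")
    }
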
Feb 13 15:31:02.623297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5-rootfs.mount: Deactivated successfully. Feb 13 15:31:02.644025 kubelet[2332]: I0213 15:31:02.643990 2332 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:31:02.843449 containerd[1888]: time="2025-02-13T15:31:02.843223976Z" level=info msg="shim disconnected" id=60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5 namespace=k8s.io Feb 13 15:31:02.845444 containerd[1888]: time="2025-02-13T15:31:02.844009323Z" level=warning msg="cleaning up after shim disconnected" id=60bab9e3cee4d62896cf591d24e85fefb09155c34a3f31e5f1c75978a7cab0b5 namespace=k8s.io Feb 13 15:31:02.845444 containerd[1888]: time="2025-02-13T15:31:02.844086346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:31:03.054883 kubelet[2332]: E0213 15:31:03.054799 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:03.291292 systemd[1]: Created slice kubepods-besteffort-podf08ddd1e_6241_48e2_83b7_49191ff10a45.slice - libcontainer container kubepods-besteffort-podf08ddd1e_6241_48e2_83b7_49191ff10a45.slice. Feb 13 15:31:03.298529 containerd[1888]: time="2025-02-13T15:31:03.298491015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:0,}" Feb 13 15:31:03.360221 containerd[1888]: time="2025-02-13T15:31:03.360182618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:31:03.384433 containerd[1888]: time="2025-02-13T15:31:03.384381287Z" level=error msg="Failed to destroy network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:03.388958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55-shm.mount: Deactivated successfully. 
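
Every sandbox add and delete from here on dies on the same stat: the calico CNI plugin refuses to do anything until /var/lib/calico/nodename exists, and that file is written only by the calico/node container, which has not started yet (install-cni only dropped the plugin binaries and the kubeconfig). A minimal sketch of that gate, using the path and message from the log; not the plugin's actual code.

    // nodenamegate.go: the gate every sandbox add/delete above fails.
    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    func main() {
        const nodenameFile = "/var/lib/calico/nodename"
        data, err := os.ReadFile(nodenameFile)
        if errors.Is(err, os.ErrNotExist) {
            // The exact failure mode in the log: calico/node has not run
            // yet, so nothing has written the nodename file.
            fmt.Printf("stat %s: no such file or directory: "+
                "check that the calico/node container is running and has "+
                "mounted /var/lib/calico/\n", nodenameFile)
            os.Exit(1)
        }
        if err != nil {
            fmt.Println("unexpected error:", err)
            os.Exit(1)
        }
        fmt.Printf("node name: %s\n", data)
    }
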
Feb 13 15:31:03.390607 containerd[1888]: time="2025-02-13T15:31:03.390536650Z" level=error msg="encountered an error cleaning up failed sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:03.390712 containerd[1888]: time="2025-02-13T15:31:03.390640526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:03.391053 kubelet[2332]: E0213 15:31:03.390879 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:03.391237 kubelet[2332]: E0213 15:31:03.391133 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:03.391459 kubelet[2332]: E0213 15:31:03.391312 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:03.391459 kubelet[2332]: E0213 15:31:03.391403 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:04.055932 kubelet[2332]: E0213 15:31:04.055876 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:04.359364 kubelet[2332]: I0213 15:31:04.359243 2332 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55" Feb 13 15:31:04.360032 containerd[1888]: time="2025-02-13T15:31:04.359988298Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:04.360440 containerd[1888]: time="2025-02-13T15:31:04.360290741Z" level=info msg="Ensure that sandbox 46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55 in task-service has been cleanup successfully" Feb 13 15:31:04.361046 containerd[1888]: time="2025-02-13T15:31:04.361009576Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:04.361188 containerd[1888]: time="2025-02-13T15:31:04.361047959Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:04.367524 containerd[1888]: time="2025-02-13T15:31:04.367459324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:1,}" Feb 13 15:31:04.368171 systemd[1]: run-netns-cni\x2d3fe9d056\x2d3eaf\x2df39f\x2d9907\x2da01d5c211a95.mount: Deactivated successfully. Feb 13 15:31:04.465517 containerd[1888]: time="2025-02-13T15:31:04.464928380Z" level=error msg="Failed to destroy network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:04.465517 containerd[1888]: time="2025-02-13T15:31:04.465333572Z" level=error msg="encountered an error cleaning up failed sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:04.465517 containerd[1888]: time="2025-02-13T15:31:04.465407004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:04.466124 kubelet[2332]: E0213 15:31:04.465913 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:04.466124 kubelet[2332]: E0213 15:31:04.465981 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:04.466124 kubelet[2332]: E0213 15:31:04.466009 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:04.466316 kubelet[2332]: E0213 15:31:04.466055 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:04.468347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786-shm.mount: Deactivated successfully. Feb 13 15:31:05.057098 kubelet[2332]: E0213 15:31:05.056237 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:05.363634 kubelet[2332]: I0213 15:31:05.363386 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786" Feb 13 15:31:05.364865 containerd[1888]: time="2025-02-13T15:31:05.364808988Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:05.365329 containerd[1888]: time="2025-02-13T15:31:05.365125899Z" level=info msg="Ensure that sandbox 6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786 in task-service has been cleanup successfully" Feb 13 15:31:05.368348 systemd[1]: run-netns-cni\x2d15f5125c\x2d45ad\x2d6f91\x2de67c\x2dffbf36d5292e.mount: Deactivated successfully. 
Feb 13 15:31:05.370551 containerd[1888]: time="2025-02-13T15:31:05.370330644Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:05.370551 containerd[1888]: time="2025-02-13T15:31:05.370367502Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:05.371656 containerd[1888]: time="2025-02-13T15:31:05.371093078Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:05.371656 containerd[1888]: time="2025-02-13T15:31:05.371220711Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:05.371656 containerd[1888]: time="2025-02-13T15:31:05.371235642Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:05.372250 containerd[1888]: time="2025-02-13T15:31:05.372225528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:2,}" Feb 13 15:31:05.508530 containerd[1888]: time="2025-02-13T15:31:05.508478593Z" level=error msg="Failed to destroy network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:05.508955 containerd[1888]: time="2025-02-13T15:31:05.508898336Z" level=error msg="encountered an error cleaning up failed sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:05.509049 containerd[1888]: time="2025-02-13T15:31:05.509010744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:05.511793 kubelet[2332]: E0213 15:31:05.509266 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:05.511793 kubelet[2332]: E0213 15:31:05.509330 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" 
Feb 13 15:31:05.511793 kubelet[2332]: E0213 15:31:05.509365 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:05.512001 kubelet[2332]: E0213 15:31:05.509415 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:05.512013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626-shm.mount: Deactivated successfully. Feb 13 15:31:05.688995 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:31:06.056745 kubelet[2332]: E0213 15:31:06.056596 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:06.378480 kubelet[2332]: I0213 15:31:06.377523 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626" Feb 13 15:31:06.378905 containerd[1888]: time="2025-02-13T15:31:06.378730222Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:06.379233 containerd[1888]: time="2025-02-13T15:31:06.378985855Z" level=info msg="Ensure that sandbox c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626 in task-service has been cleanup successfully" Feb 13 15:31:06.383406 systemd[1]: run-netns-cni\x2d0e2fdeb0\x2d946b\x2de509\x2d53c8\x2dd21bd0485388.mount: Deactivated successfully. 
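
The run-netns-cni\x2d... mount units are ordinary paths run through systemd unit-name escaping: "/" becomes "-" and a literal "-" becomes \x2d, so run-netns-cni\x2d15f5125c... is really /run/netns/cni-15f5125c-.... A small sketch of the reverse mapping, covering only the two rules visible in this log:

    // unitescape.go: decode mount-unit names back into filesystem paths.
    // A simplified sketch of systemd-escape's reverse direction.
    package main

    import (
        "fmt"
        "strings"
    )

    func unescapeUnitPath(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        // Undo the escaping in two steps: "-" marks path separators,
        // then \x2d sequences restore literal dashes.
        path := "/" + strings.ReplaceAll(name, "-", "/")
        return strings.ReplaceAll(path, `\x2d`, "-")
    }

    func main() {
        fmt.Println(unescapeUnitPath(
            `run-netns-cni\x2d3fe9d056\x2d3eaf\x2df39f\x2d9907\x2da01d5c211a95.mount`))
        // -> /run/netns/cni-3fe9d056-3eaf-f39f-9907-a01d5c211a95
    }
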
Feb 13 15:31:06.384786 containerd[1888]: time="2025-02-13T15:31:06.383933755Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:06.384786 containerd[1888]: time="2025-02-13T15:31:06.383980655Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:06.384786 containerd[1888]: time="2025-02-13T15:31:06.384769693Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:06.384951 containerd[1888]: time="2025-02-13T15:31:06.384889585Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:06.384999 containerd[1888]: time="2025-02-13T15:31:06.384947775Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:06.386867 containerd[1888]: time="2025-02-13T15:31:06.385913885Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:06.386867 containerd[1888]: time="2025-02-13T15:31:06.386012588Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:06.386867 containerd[1888]: time="2025-02-13T15:31:06.386027964Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:06.386867 containerd[1888]: time="2025-02-13T15:31:06.386635869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:3,}" Feb 13 15:31:06.392908 systemd[1]: Created slice kubepods-besteffort-podd5f6a5ec_239d_4976_acea_200a963b6960.slice - libcontainer container kubepods-besteffort-podd5f6a5ec_239d_4976_acea_200a963b6960.slice. 
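
Both pods land under kubepods-besteffort-pod<UID>.slice because neither declares resource requests or limits; with the systemd cgroup driver, kubelet creates one transient slice per pod under its QoS class. A simplified sketch of the QoS classification (the real rules live in k8s.io/kubernetes; the Container type here is a stand-in, not the API struct):

    // qos.go: why the slices above are named kubepods-besteffort-pod<UID>.
    package main

    import "fmt"

    type Container struct {
        Requests map[string]string // resource name -> quantity
        Limits   map[string]string
    }

    func qosClass(containers []Container) string {
        anySet := false
        allGuaranteed := true
        for _, c := range containers {
            if len(c.Requests) > 0 || len(c.Limits) > 0 {
                anySet = true
            }
            // Guaranteed needs cpu+memory limits, requests equal to them
            // (unset requests default to the limits).
            if c.Limits["cpu"] == "" || c.Limits["memory"] == "" {
                allGuaranteed = false
                continue
            }
            for r, lim := range c.Limits {
                if req, ok := c.Requests[r]; ok && req != lim {
                    allGuaranteed = false
                }
            }
        }
        switch {
        case !anySet:
            return "BestEffort" // the csi-node-driver and nginx pods here
        case allGuaranteed:
            return "Guaranteed"
        default:
            return "Burstable"
        }
    }

    func main() {
        fmt.Println(qosClass([]Container{{}})) // no requests/limits -> BestEffort
    }
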
Feb 13 15:31:06.409358 kubelet[2332]: I0213 15:31:06.407289 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94l5\" (UniqueName: \"kubernetes.io/projected/d5f6a5ec-239d-4976-acea-200a963b6960-kube-api-access-n94l5\") pod \"nginx-deployment-8587fbcb89-59dbf\" (UID: \"d5f6a5ec-239d-4976-acea-200a963b6960\") " pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:06.525842 containerd[1888]: time="2025-02-13T15:31:06.525787906Z" level=error msg="Failed to destroy network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.539704 containerd[1888]: time="2025-02-13T15:31:06.538581612Z" level=error msg="encountered an error cleaning up failed sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.541173 containerd[1888]: time="2025-02-13T15:31:06.540193937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.542443 kubelet[2332]: E0213 15:31:06.541769 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.542443 kubelet[2332]: E0213 15:31:06.541837 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:06.542443 kubelet[2332]: E0213 15:31:06.541870 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:06.542642 kubelet[2332]: E0213 15:31:06.541926 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:06.545871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5-shm.mount: Deactivated successfully. Feb 13 15:31:06.699133 containerd[1888]: time="2025-02-13T15:31:06.697701778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:0,}" Feb 13 15:31:06.823852 containerd[1888]: time="2025-02-13T15:31:06.823800121Z" level=error msg="Failed to destroy network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.824775 containerd[1888]: time="2025-02-13T15:31:06.824722841Z" level=error msg="encountered an error cleaning up failed sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.824889 containerd[1888]: time="2025-02-13T15:31:06.824810686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.825570 kubelet[2332]: E0213 15:31:06.825058 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:06.825570 kubelet[2332]: E0213 15:31:06.825355 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:06.825570 kubelet[2332]: E0213 15:31:06.825376 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:06.826396 kubelet[2332]: E0213 15:31:06.826345 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:07.057943 kubelet[2332]: E0213 15:31:07.057360 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:07.389857 kubelet[2332]: I0213 15:31:07.389467 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5" Feb 13 15:31:07.390501 containerd[1888]: time="2025-02-13T15:31:07.390467526Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.391440560Z" level=info msg="Ensure that sandbox 0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5 in task-service has been cleanup successfully" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.393248396Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.393587763Z" level=info msg="Ensure that sandbox 7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d in task-service has been cleanup successfully" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.393812882Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.393847435Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:07.395055 containerd[1888]: time="2025-02-13T15:31:07.394507047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:1,}" Feb 13 15:31:07.395333 kubelet[2332]: I0213 15:31:07.392596 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d" Feb 13 15:31:07.395968 containerd[1888]: time="2025-02-13T15:31:07.395599052Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:07.395968 containerd[1888]: time="2025-02-13T15:31:07.395623217Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:07.397068 systemd[1]: 
run-netns-cni\x2d47fa9927\x2d2a62\x2d1a1e\x2d1d7b\x2d8471b443ced3.mount: Deactivated successfully. Feb 13 15:31:07.397613 containerd[1888]: time="2025-02-13T15:31:07.397420809Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:07.397613 containerd[1888]: time="2025-02-13T15:31:07.397587812Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:07.397613 containerd[1888]: time="2025-02-13T15:31:07.397606429Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:07.400980 containerd[1888]: time="2025-02-13T15:31:07.399250642Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:07.400980 containerd[1888]: time="2025-02-13T15:31:07.399343322Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:07.400980 containerd[1888]: time="2025-02-13T15:31:07.399358350Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:07.404013 systemd[1]: run-netns-cni\x2ddb1f3653\x2dae80\x2d7c89\x2dcad4\x2d85e743b3b71c.mount: Deactivated successfully. Feb 13 15:31:07.404479 containerd[1888]: time="2025-02-13T15:31:07.402740854Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:07.404784 containerd[1888]: time="2025-02-13T15:31:07.404565972Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:07.404784 containerd[1888]: time="2025-02-13T15:31:07.404586116Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:07.407234 containerd[1888]: time="2025-02-13T15:31:07.407205306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:4,}" Feb 13 15:31:07.580261 containerd[1888]: time="2025-02-13T15:31:07.580118230Z" level=error msg="Failed to destroy network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.583715 containerd[1888]: time="2025-02-13T15:31:07.583489289Z" level=error msg="encountered an error cleaning up failed sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.583715 containerd[1888]: time="2025-02-13T15:31:07.583588859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.584205 kubelet[2332]: E0213 15:31:07.583955 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.584205 kubelet[2332]: E0213 15:31:07.584061 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:07.584205 kubelet[2332]: E0213 15:31:07.584191 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:07.585037 kubelet[2332]: E0213 15:31:07.584379 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:07.591315 containerd[1888]: time="2025-02-13T15:31:07.591182452Z" level=error msg="Failed to destroy network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.592025 containerd[1888]: time="2025-02-13T15:31:07.591959136Z" level=error msg="encountered an error cleaning up failed sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.592373 containerd[1888]: time="2025-02-13T15:31:07.592342894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.593446 kubelet[2332]: E0213 15:31:07.593408 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:07.593530 kubelet[2332]: E0213 15:31:07.593474 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:07.593530 kubelet[2332]: E0213 15:31:07.593504 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:07.593625 kubelet[2332]: E0213 15:31:07.593558 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:08.057523 kubelet[2332]: E0213 15:31:08.057482 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:08.385521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345-shm.mount: Deactivated successfully. 
Feb 13 15:31:08.399711 kubelet[2332]: I0213 15:31:08.399681 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871" Feb 13 15:31:08.401110 containerd[1888]: time="2025-02-13T15:31:08.400738289Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:08.405130 containerd[1888]: time="2025-02-13T15:31:08.402641663Z" level=info msg="Ensure that sandbox f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871 in task-service has been cleanup successfully" Feb 13 15:31:08.405130 containerd[1888]: time="2025-02-13T15:31:08.403134814Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:08.405130 containerd[1888]: time="2025-02-13T15:31:08.403156573Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:08.406023 containerd[1888]: time="2025-02-13T15:31:08.405532309Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:08.406023 containerd[1888]: time="2025-02-13T15:31:08.405631561Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:08.406023 containerd[1888]: time="2025-02-13T15:31:08.405647616Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:08.406210 kubelet[2332]: I0213 15:31:08.405280 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345" Feb 13 15:31:08.407827 containerd[1888]: time="2025-02-13T15:31:08.407422288Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:08.407827 containerd[1888]: time="2025-02-13T15:31:08.407484481Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:08.407827 containerd[1888]: time="2025-02-13T15:31:08.407565956Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:08.407827 containerd[1888]: time="2025-02-13T15:31:08.407582289Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:08.408263 systemd[1]: run-netns-cni\x2d29d8bb16\x2d63ef\x2d78d6\x2dc18c\x2d440c40f11b60.mount: Deactivated successfully. 
Feb 13 15:31:08.410528 containerd[1888]: time="2025-02-13T15:31:08.410464232Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:08.410774 containerd[1888]: time="2025-02-13T15:31:08.410755258Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:08.410950 containerd[1888]: time="2025-02-13T15:31:08.410903879Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:08.411243 containerd[1888]: time="2025-02-13T15:31:08.410807810Z" level=info msg="Ensure that sandbox 1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345 in task-service has been cleanup successfully" Feb 13 15:31:08.413838 containerd[1888]: time="2025-02-13T15:31:08.413811598Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:08.413940 containerd[1888]: time="2025-02-13T15:31:08.413909589Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:08.413940 containerd[1888]: time="2025-02-13T15:31:08.413925102Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:08.414605 systemd[1]: run-netns-cni\x2d07ca3bb3\x2d1dc0\x2d8b14\x2dd163\x2dcc605cd48e7d.mount: Deactivated successfully. Feb 13 15:31:08.415582 containerd[1888]: time="2025-02-13T15:31:08.415548984Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:08.415582 containerd[1888]: time="2025-02-13T15:31:08.415572253Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:08.418207 containerd[1888]: time="2025-02-13T15:31:08.417859004Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:08.418429 containerd[1888]: time="2025-02-13T15:31:08.418350733Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:08.418429 containerd[1888]: time="2025-02-13T15:31:08.418375691Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:08.418535 containerd[1888]: time="2025-02-13T15:31:08.418515091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:5,}" Feb 13 15:31:08.424643 containerd[1888]: time="2025-02-13T15:31:08.424606245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:2,}" Feb 13 15:31:08.685505 containerd[1888]: time="2025-02-13T15:31:08.685029511Z" level=error msg="Failed to destroy network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.687763 containerd[1888]: time="2025-02-13T15:31:08.686754374Z" level=error msg="encountered an error 
cleaning up failed sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.687763 containerd[1888]: time="2025-02-13T15:31:08.686912553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.687763 containerd[1888]: time="2025-02-13T15:31:08.687616817Z" level=error msg="Failed to destroy network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.688387 kubelet[2332]: E0213 15:31:08.687165 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.688387 kubelet[2332]: E0213 15:31:08.687232 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:08.688387 kubelet[2332]: E0213 15:31:08.687251 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:08.688500 containerd[1888]: time="2025-02-13T15:31:08.688337035Z" level=error msg="encountered an error cleaning up failed sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.688529 kubelet[2332]: E0213 15:31:08.687303 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:08.688682 containerd[1888]: time="2025-02-13T15:31:08.688489027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.689059 kubelet[2332]: E0213 15:31:08.689006 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:08.689245 kubelet[2332]: E0213 15:31:08.689146 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:08.689852 kubelet[2332]: E0213 15:31:08.689724 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:08.689852 kubelet[2332]: E0213 15:31:08.689790 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:09.059140 kubelet[2332]: E0213 15:31:09.058823 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:09.384389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69-shm.mount: Deactivated successfully. 
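
None of this resolves until calico/node starts, writes /var/lib/calico/nodename, and install-cni's configuration materialises as a *.conflist in /etc/cni/net.d; at that point the reload succeeds, NetworkReady flips to true, and the pending attempts (six for the CSI driver, three for nginx by the end of this excerpt) finally get sandboxes. A minimal polling sketch of that condition, assuming default paths; the authoritative signal is kubelet's NetworkReady status, not these files:

    // waitready.go: the condition the retries above are waiting on.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func ready() bool {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return false
        }
        lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        return len(lists) > 0
    }

    func main() {
        for !ready() {
            fmt.Println("network not ready; sleeping")
            time.Sleep(2 * time.Second)
        }
        fmt.Println("calico nodename file and CNI config present")
    }
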
Feb 13 15:31:09.413832 kubelet[2332]: I0213 15:31:09.412928 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69" Feb 13 15:31:09.415607 containerd[1888]: time="2025-02-13T15:31:09.413901258Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:09.415607 containerd[1888]: time="2025-02-13T15:31:09.414304978Z" level=info msg="Ensure that sandbox c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69 in task-service has been cleanup successfully" Feb 13 15:31:09.416401 containerd[1888]: time="2025-02-13T15:31:09.415613825Z" level=info msg="TearDown network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" successfully" Feb 13 15:31:09.416401 containerd[1888]: time="2025-02-13T15:31:09.415635720Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" returns successfully" Feb 13 15:31:09.416401 containerd[1888]: time="2025-02-13T15:31:09.416301683Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:09.416401 containerd[1888]: time="2025-02-13T15:31:09.416397332Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:09.417815 containerd[1888]: time="2025-02-13T15:31:09.416411824Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:09.419250 systemd[1]: run-netns-cni\x2d9a5ffdaa\x2df809\x2de57f\x2da8ff\x2d9fd3dd83a737.mount: Deactivated successfully. Feb 13 15:31:09.423212 containerd[1888]: time="2025-02-13T15:31:09.421349628Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:09.423212 containerd[1888]: time="2025-02-13T15:31:09.421450660Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:09.423212 containerd[1888]: time="2025-02-13T15:31:09.421467900Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:09.423212 containerd[1888]: time="2025-02-13T15:31:09.423204145Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:09.423430 kubelet[2332]: I0213 15:31:09.422670 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60" Feb 13 15:31:09.423485 containerd[1888]: time="2025-02-13T15:31:09.423438364Z" level=info msg="Ensure that sandbox 5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60 in task-service has been cleanup successfully" Feb 13 15:31:09.427970 containerd[1888]: time="2025-02-13T15:31:09.426644796Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:09.427970 containerd[1888]: time="2025-02-13T15:31:09.426756328Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:09.427970 containerd[1888]: time="2025-02-13T15:31:09.426772840Z" level=info msg="StopPodSandbox for 
\"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:09.426904 systemd[1]: run-netns-cni\x2d0573fe5e\x2d04c5\x2dce3d\x2d5ad4\x2d466300221b09.mount: Deactivated successfully. Feb 13 15:31:09.429925 containerd[1888]: time="2025-02-13T15:31:09.428728181Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:09.429925 containerd[1888]: time="2025-02-13T15:31:09.428831385Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:09.429925 containerd[1888]: time="2025-02-13T15:31:09.429915647Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:09.430371 containerd[1888]: time="2025-02-13T15:31:09.430010421Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:09.430371 containerd[1888]: time="2025-02-13T15:31:09.430025014Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:09.431151 containerd[1888]: time="2025-02-13T15:31:09.431126108Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:09.431238 containerd[1888]: time="2025-02-13T15:31:09.431222233Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:09.431285 containerd[1888]: time="2025-02-13T15:31:09.431237461Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:09.431646 containerd[1888]: time="2025-02-13T15:31:09.431617520Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:09.431755 containerd[1888]: time="2025-02-13T15:31:09.431731402Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:09.431807 containerd[1888]: time="2025-02-13T15:31:09.431750866Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:09.432115 containerd[1888]: time="2025-02-13T15:31:09.431994519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:6,}" Feb 13 15:31:09.432487 containerd[1888]: time="2025-02-13T15:31:09.432307270Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:09.432487 containerd[1888]: time="2025-02-13T15:31:09.432395467Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:09.432487 containerd[1888]: time="2025-02-13T15:31:09.432409153Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:09.433935 containerd[1888]: time="2025-02-13T15:31:09.433907379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:3,}" Feb 13 15:31:09.666392 containerd[1888]: 
time="2025-02-13T15:31:09.664040904Z" level=error msg="Failed to destroy network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.666392 containerd[1888]: time="2025-02-13T15:31:09.664536725Z" level=error msg="encountered an error cleaning up failed sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.666392 containerd[1888]: time="2025-02-13T15:31:09.664933477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.666634 kubelet[2332]: E0213 15:31:09.665193 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.666634 kubelet[2332]: E0213 15:31:09.665255 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:09.666634 kubelet[2332]: E0213 15:31:09.665281 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:09.666796 kubelet[2332]: E0213 15:31:09.665346 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" 
Feb 13 15:31:09.678098 containerd[1888]: time="2025-02-13T15:31:09.677668093Z" level=error msg="Failed to destroy network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.678098 containerd[1888]: time="2025-02-13T15:31:09.678019962Z" level=error msg="encountered an error cleaning up failed sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.678277 containerd[1888]: time="2025-02-13T15:31:09.678118419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.679134 kubelet[2332]: E0213 15:31:09.678767 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:09.679134 kubelet[2332]: E0213 15:31:09.678822 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:09.679134 kubelet[2332]: E0213 15:31:09.678842 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:09.679527 kubelet[2332]: E0213 15:31:09.678883 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:10.040023 kubelet[2332]: E0213 15:31:10.039367 2332 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:10.059150 kubelet[2332]: E0213 15:31:10.059115 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:10.384139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526-shm.mount: Deactivated successfully. Feb 13 15:31:10.384338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc-shm.mount: Deactivated successfully. Feb 13 15:31:10.429762 kubelet[2332]: I0213 15:31:10.429727 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526" Feb 13 15:31:10.430746 containerd[1888]: time="2025-02-13T15:31:10.430714040Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" Feb 13 15:31:10.434523 containerd[1888]: time="2025-02-13T15:31:10.431584469Z" level=info msg="Ensure that sandbox 0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526 in task-service has been cleanup successfully" Feb 13 15:31:10.434523 containerd[1888]: time="2025-02-13T15:31:10.434409613Z" level=info msg="TearDown network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" successfully" Feb 13 15:31:10.434523 containerd[1888]: time="2025-02-13T15:31:10.434433556Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" returns successfully" Feb 13 15:31:10.434904 containerd[1888]: time="2025-02-13T15:31:10.434878792Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:10.436372 containerd[1888]: time="2025-02-13T15:31:10.435223822Z" level=info msg="TearDown network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" successfully" Feb 13 15:31:10.436372 containerd[1888]: time="2025-02-13T15:31:10.435245497Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" returns successfully" Feb 13 15:31:10.436372 containerd[1888]: time="2025-02-13T15:31:10.435596613Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:10.436372 containerd[1888]: time="2025-02-13T15:31:10.435678014Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:10.436372 containerd[1888]: time="2025-02-13T15:31:10.435692400Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:10.435301 systemd[1]: run-netns-cni\x2dd63c03ce\x2db82d\x2dc046\x2d591e\x2d01e6f9bd7b1a.mount: Deactivated successfully. 
Feb 13 15:31:10.437904 containerd[1888]: time="2025-02-13T15:31:10.437874275Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:10.438054 containerd[1888]: time="2025-02-13T15:31:10.437963695Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:10.438054 containerd[1888]: time="2025-02-13T15:31:10.437981499Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.439954108Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.440046354Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.440061387Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.440546108Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.440628429Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.440641156Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.441029988Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.441161973Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.441179365Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.441613727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:7,}" Feb 13 15:31:10.442223 containerd[1888]: time="2025-02-13T15:31:10.442190677Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:10.442761 kubelet[2332]: I0213 15:31:10.441402 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc" Feb 13 15:31:10.442820 containerd[1888]: time="2025-02-13T15:31:10.442395381Z" level=info msg="Ensure that sandbox 394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc in task-service has been cleanup successfully" Feb 13 15:31:10.445365 containerd[1888]: time="2025-02-13T15:31:10.443444724Z" level=info msg="TearDown network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" successfully" Feb 13 15:31:10.445365 containerd[1888]: time="2025-02-13T15:31:10.443466869Z" level=info 
msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" returns successfully" Feb 13 15:31:10.445365 containerd[1888]: time="2025-02-13T15:31:10.445185946Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:10.445365 containerd[1888]: time="2025-02-13T15:31:10.445277110Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:10.445365 containerd[1888]: time="2025-02-13T15:31:10.445292327Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:10.446564 systemd[1]: run-netns-cni\x2de8d77f83\x2de611\x2d6be2\x2df3b8\x2d7563a1b4b5dc.mount: Deactivated successfully. Feb 13 15:31:10.448669 containerd[1888]: time="2025-02-13T15:31:10.448227633Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:10.449981 containerd[1888]: time="2025-02-13T15:31:10.449954873Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:10.450057 containerd[1888]: time="2025-02-13T15:31:10.449980511Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:10.450818 containerd[1888]: time="2025-02-13T15:31:10.450502039Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:10.450818 containerd[1888]: time="2025-02-13T15:31:10.450590898Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:10.450818 containerd[1888]: time="2025-02-13T15:31:10.450604941Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:10.453094 containerd[1888]: time="2025-02-13T15:31:10.452829696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:4,}" Feb 13 15:31:10.660121 containerd[1888]: time="2025-02-13T15:31:10.659231835Z" level=error msg="Failed to destroy network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.662034 containerd[1888]: time="2025-02-13T15:31:10.660721855Z" level=error msg="encountered an error cleaning up failed sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.662434 containerd[1888]: time="2025-02-13T15:31:10.662295363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.663268 kubelet[2332]: E0213 15:31:10.663216 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.663379 kubelet[2332]: E0213 15:31:10.663291 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:10.663379 kubelet[2332]: E0213 15:31:10.663318 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:10.663667 kubelet[2332]: E0213 15:31:10.663372 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:10.678801 containerd[1888]: time="2025-02-13T15:31:10.678646611Z" level=error msg="Failed to destroy network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.680402 containerd[1888]: time="2025-02-13T15:31:10.679568020Z" level=error msg="encountered an error cleaning up failed sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.680402 containerd[1888]: time="2025-02-13T15:31:10.679655073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.680599 kubelet[2332]: E0213 15:31:10.679898 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:10.680599 kubelet[2332]: E0213 15:31:10.679967 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:10.680599 kubelet[2332]: E0213 15:31:10.679998 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hk2gm" Feb 13 15:31:10.680716 kubelet[2332]: E0213 15:31:10.680054 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hk2gm_calico-system(f08ddd1e-6241-48e2-83b7-49191ff10a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hk2gm" podUID="f08ddd1e-6241-48e2-83b7-49191ff10a45" Feb 13 15:31:11.060005 kubelet[2332]: E0213 15:31:11.059898 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:11.132006 containerd[1888]: time="2025-02-13T15:31:11.131902323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:11.134038 containerd[1888]: time="2025-02-13T15:31:11.133952590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:31:11.136339 containerd[1888]: time="2025-02-13T15:31:11.136266174Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:11.139972 containerd[1888]: time="2025-02-13T15:31:11.139909391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:11.141262 
containerd[1888]: time="2025-02-13T15:31:11.140695043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.780466474s" Feb 13 15:31:11.141262 containerd[1888]: time="2025-02-13T15:31:11.140736892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:31:11.165329 containerd[1888]: time="2025-02-13T15:31:11.165290869Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:31:11.191741 containerd[1888]: time="2025-02-13T15:31:11.191683598Z" level=info msg="CreateContainer within sandbox \"c6130a4aa653c9f9cba8830b6a3b9d4f80ea7aa17be1d92058e5813d1fd0577f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81\"" Feb 13 15:31:11.192458 containerd[1888]: time="2025-02-13T15:31:11.192425988Z" level=info msg="StartContainer for \"c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81\"" Feb 13 15:31:11.291023 systemd[1]: Started cri-containerd-c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81.scope - libcontainer container c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81. Feb 13 15:31:11.375993 containerd[1888]: time="2025-02-13T15:31:11.375747384Z" level=info msg="StartContainer for \"c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81\" returns successfully" Feb 13 15:31:11.397045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd-shm.mount: Deactivated successfully. Feb 13 15:31:11.397531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009-shm.mount: Deactivated successfully. Feb 13 15:31:11.397745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440388023.mount: Deactivated successfully. Feb 13 15:31:11.461155 kubelet[2332]: I0213 15:31:11.459519 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd" Feb 13 15:31:11.465236 containerd[1888]: time="2025-02-13T15:31:11.461905473Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" Feb 13 15:31:11.465236 containerd[1888]: time="2025-02-13T15:31:11.462359308Z" level=info msg="Ensure that sandbox 43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd in task-service has been cleanup successfully" Feb 13 15:31:11.471374 containerd[1888]: time="2025-02-13T15:31:11.471114100Z" level=info msg="TearDown network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" successfully" Feb 13 15:31:11.471374 containerd[1888]: time="2025-02-13T15:31:11.471150907Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" returns successfully" Feb 13 15:31:11.474759 systemd[1]: run-netns-cni\x2dfb625686\x2d1d2a\x2d0c54\x2d3840\x2df5071f6b2ab3.mount: Deactivated successfully. 
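
Two things happen in this stretch: the calico/node image pull finally completes so the calico-node container can start, and the kubelet keeps retrying the failed sandboxes, bumping the Attempt field of the sandbox metadata on each round (Attempt:2 through Attempt:8 across the surrounding entries). A small sketch of that retry bookkeeping, using a simplified stand-in for the CRI's PodSandboxMetadata (field values copied from the log):

    package main

    import "fmt"

    // Simplified stand-in for the CRI's PodSandboxMetadata, mirroring the
    // fields printed in the log's &PodSandboxMetadata{...} literals.
    type PodSandboxMetadata struct {
        Name      string
        Uid       string
        Namespace string
        Attempt   uint32
    }

    func main() {
        m := PodSandboxMetadata{
            Name:      "csi-node-driver-hk2gm",
            Uid:       "f08ddd1e-6241-48e2-83b7-49191ff10a45",
            Namespace: "calico-system",
            Attempt:   6,
        }
        // After each failed RunPodSandbox the kubelet tears the sandbox down
        // and retries with the attempt counter incremented (6 -> 7 -> 8 here).
        for i := 0; i < 2; i++ {
            m.Attempt++
            fmt.Printf("RunPodSandbox for &PodSandboxMetadata{Name:%s,Uid:%s,Namespace:%s,Attempt:%d,}\n",
                m.Name, m.Uid, m.Namespace, m.Attempt)
        }
    }
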
Feb 13 15:31:11.477625 containerd[1888]: time="2025-02-13T15:31:11.476948340Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:11.477625 containerd[1888]: time="2025-02-13T15:31:11.477135355Z" level=info msg="TearDown network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" successfully" Feb 13 15:31:11.477625 containerd[1888]: time="2025-02-13T15:31:11.477167646Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" returns successfully" Feb 13 15:31:11.479184 containerd[1888]: time="2025-02-13T15:31:11.478357555Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:11.479184 containerd[1888]: time="2025-02-13T15:31:11.478555497Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:11.479184 containerd[1888]: time="2025-02-13T15:31:11.478634162Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:11.480704 containerd[1888]: time="2025-02-13T15:31:11.479406289Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:11.480704 containerd[1888]: time="2025-02-13T15:31:11.479492854Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:11.480704 containerd[1888]: time="2025-02-13T15:31:11.479506664Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:11.480704 containerd[1888]: time="2025-02-13T15:31:11.480671331Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:11.480952 containerd[1888]: time="2025-02-13T15:31:11.480762171Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:11.480952 containerd[1888]: time="2025-02-13T15:31:11.480835058Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:11.484180 containerd[1888]: time="2025-02-13T15:31:11.482345554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:5,}" Feb 13 15:31:11.507223 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:31:11.507355 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 15:31:11.507390 kubelet[2332]: I0213 15:31:11.502914 2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009" Feb 13 15:31:11.507958 containerd[1888]: time="2025-02-13T15:31:11.507915049Z" level=info msg="StopPodSandbox for \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\"" Feb 13 15:31:11.509554 containerd[1888]: time="2025-02-13T15:31:11.509442570Z" level=info msg="Ensure that sandbox 287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009 in task-service has been cleanup successfully" Feb 13 15:31:11.515958 systemd[1]: run-netns-cni\x2d9241ddda\x2d2da4\x2d643d\x2d47f8\x2d0235aa5e6987.mount: Deactivated successfully. Feb 13 15:31:11.520109 containerd[1888]: time="2025-02-13T15:31:11.520044877Z" level=info msg="TearDown network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" successfully" Feb 13 15:31:11.520109 containerd[1888]: time="2025-02-13T15:31:11.520098788Z" level=info msg="StopPodSandbox for \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" returns successfully" Feb 13 15:31:11.523535 containerd[1888]: time="2025-02-13T15:31:11.521489294Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" Feb 13 15:31:11.523535 containerd[1888]: time="2025-02-13T15:31:11.521642660Z" level=info msg="TearDown network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" successfully" Feb 13 15:31:11.523535 containerd[1888]: time="2025-02-13T15:31:11.521661073Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" returns successfully" Feb 13 15:31:11.525498 containerd[1888]: time="2025-02-13T15:31:11.525467404Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:11.525879 containerd[1888]: time="2025-02-13T15:31:11.525841224Z" level=info msg="TearDown network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" successfully" Feb 13 15:31:11.526005 containerd[1888]: time="2025-02-13T15:31:11.525988043Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" returns successfully" Feb 13 15:31:11.540733 containerd[1888]: time="2025-02-13T15:31:11.540612500Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:11.541242 containerd[1888]: time="2025-02-13T15:31:11.541143440Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:11.541242 containerd[1888]: time="2025-02-13T15:31:11.541189574Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:11.544098 containerd[1888]: time="2025-02-13T15:31:11.543248189Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:11.544098 containerd[1888]: time="2025-02-13T15:31:11.543430744Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:11.544098 containerd[1888]: time="2025-02-13T15:31:11.543451139Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns 
successfully" Feb 13 15:31:11.547101 containerd[1888]: time="2025-02-13T15:31:11.546700946Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:11.547101 containerd[1888]: time="2025-02-13T15:31:11.546808023Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:11.547101 containerd[1888]: time="2025-02-13T15:31:11.546822166Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:11.548820 containerd[1888]: time="2025-02-13T15:31:11.548781440Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:11.550007 containerd[1888]: time="2025-02-13T15:31:11.549529744Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:11.550007 containerd[1888]: time="2025-02-13T15:31:11.549871571Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:11.551304 containerd[1888]: time="2025-02-13T15:31:11.551252604Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:11.551566 containerd[1888]: time="2025-02-13T15:31:11.551376214Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:11.551566 containerd[1888]: time="2025-02-13T15:31:11.551395185Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:11.565270 containerd[1888]: time="2025-02-13T15:31:11.565129102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:8,}" Feb 13 15:31:11.884468 kubelet[2332]: I0213 15:31:11.884417 2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dgldv" podStartSLOduration=4.071569957 podStartE2EDuration="21.884397473s" podCreationTimestamp="2025-02-13 15:30:50 +0000 UTC" firstStartedPulling="2025-02-13 15:30:53.328714308 +0000 UTC m=+4.040001275" lastFinishedPulling="2025-02-13 15:31:11.141541825 +0000 UTC m=+21.852828791" observedRunningTime="2025-02-13 15:31:11.536409911 +0000 UTC m=+22.247696899" watchObservedRunningTime="2025-02-13 15:31:11.884397473 +0000 UTC m=+22.595684441" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.882 [INFO][3340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.882 [INFO][3340] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" iface="eth0" netns="/var/run/netns/cni-1b3d27b4-19e6-8b7d-40ea-38f16fbe1a83" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.883 [INFO][3340] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" iface="eth0" netns="/var/run/netns/cni-1b3d27b4-19e6-8b7d-40ea-38f16fbe1a83" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.886 [INFO][3340] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" iface="eth0" netns="/var/run/netns/cni-1b3d27b4-19e6-8b7d-40ea-38f16fbe1a83" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.886 [INFO][3340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.886 [INFO][3340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.940 [INFO][3357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" HandleID="k8s-pod-network.ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.948 [INFO][3357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.948 [INFO][3357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.985 [WARNING][3357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" HandleID="k8s-pod-network.ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.985 [INFO][3357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" HandleID="k8s-pod-network.ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.990 [INFO][3357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:31:11.994851 containerd[1888]: 2025-02-13 15:31:11.993 [INFO][3340] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607" Feb 13 15:31:11.999894 containerd[1888]: time="2025-02-13T15:31:11.999852994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:12.000359 kubelet[2332]: E0213 15:31:12.000300 2332 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:31:12.000450 kubelet[2332]: E0213 15:31:12.000386 2332 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:12.000450 kubelet[2332]: E0213 15:31:12.000428 2332 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-59dbf" Feb 13 15:31:12.000863 kubelet[2332]: E0213 15:31:12.000606 2332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-59dbf_default(d5f6a5ec-239d-4976-acea-200a963b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-59dbf" podUID="d5f6a5ec-239d-4976-acea-200a963b6960" Feb 13 15:31:12.061208 kubelet[2332]: E0213 15:31:12.061159 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:12.201493 (udev-worker)[3276]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:31:12.205544 systemd-networkd[1720]: cali743c5ced4f9: Link UP Feb 13 15:31:12.205833 systemd-networkd[1720]: cali743c5ced4f9: Gained carrier Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.777 [INFO][3318] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.860 [INFO][3318] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.113-k8s-csi--node--driver--hk2gm-eth0 csi-node-driver- calico-system f08ddd1e-6241-48e2-83b7-49191ff10a45 950 0 2025-02-13 15:30:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.26.113 csi-node-driver-hk2gm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali743c5ced4f9 [] []}} ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.860 [INFO][3318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.969 [INFO][3356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" HandleID="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Workload="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.989 [INFO][3356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" HandleID="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Workload="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d70), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.26.113", "pod":"csi-node-driver-hk2gm", "timestamp":"2025-02-13 15:31:11.969727482 +0000 UTC"}, Hostname:"172.31.26.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.990 [INFO][3356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.990 [INFO][3356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:11.991 [INFO][3356] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.113' Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.049 [INFO][3356] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.093 [INFO][3356] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.109 [INFO][3356] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.123 [INFO][3356] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.133 [INFO][3356] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.133 [INFO][3356] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.140 [INFO][3356] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.175 [INFO][3356] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.187 [INFO][3356] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.1/26] block=192.168.118.0/26 handle="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.187 [INFO][3356] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.1/26] handle="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" host="172.31.26.113" Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.187 [INFO][3356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
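
The IPAM walk above is Calico's block-affinity scheme in action: node 172.31.26.113 confirms its affinity for the block 192.168.118.0/26 and claims the first assignable address in it for csi-node-driver-hk2gm. A short sketch of the containment and block-size arithmetic with the values from the log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Block and address as logged by ipam/ipam.go above.
        _, block, err := net.ParseCIDR("192.168.118.0/26")
        if err != nil {
            panic(err)
        }
        ip := net.ParseIP("192.168.118.1")

        ones, bits := block.Mask.Size()
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64
        fmt.Println("assigned IP inside block:", block.Contains(ip))       // true
    }
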
Feb 13 15:31:12.256157 containerd[1888]: 2025-02-13 15:31:12.187 [INFO][3356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.1/26] IPv6=[] ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" HandleID="k8s-pod-network.f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Workload="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.190 [INFO][3318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-csi--node--driver--hk2gm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f08ddd1e-6241-48e2-83b7-49191ff10a45", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 30, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"", Pod:"csi-node-driver-hk2gm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali743c5ced4f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.190 [INFO][3318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.1/32] ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.190 [INFO][3318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali743c5ced4f9 ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.205 [INFO][3318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.206 [INFO][3318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm"
WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-csi--node--driver--hk2gm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f08ddd1e-6241-48e2-83b7-49191ff10a45", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c", Pod:"csi-node-driver-hk2gm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali743c5ced4f9", MAC:"be:9f:c2:fb:40:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:12.258263 containerd[1888]: 2025-02-13 15:31:12.253 [INFO][3318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c" Namespace="calico-system" Pod="csi-node-driver-hk2gm" WorkloadEndpoint="172.31.26.113-k8s-csi--node--driver--hk2gm-eth0" Feb 13 15:31:12.307154 containerd[1888]: time="2025-02-13T15:31:12.306756381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:12.307154 containerd[1888]: time="2025-02-13T15:31:12.306832257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:12.307154 containerd[1888]: time="2025-02-13T15:31:12.306909993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:12.311324 containerd[1888]: time="2025-02-13T15:31:12.307061187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:12.356425 systemd[1]: Started cri-containerd-f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c.scope - libcontainer container f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c. Feb 13 15:31:12.392267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac2977d25a9bbaa58b202684649bebe817d03f84b69d492897709fea40192607-shm.mount: Deactivated successfully. 
Feb 13 15:31:12.404503 containerd[1888]: time="2025-02-13T15:31:12.404442178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hk2gm,Uid:f08ddd1e-6241-48e2-83b7-49191ff10a45,Namespace:calico-system,Attempt:8,} returns sandbox id \"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c\"" Feb 13 15:31:12.406681 containerd[1888]: time="2025-02-13T15:31:12.406427337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:31:12.511304 containerd[1888]: time="2025-02-13T15:31:12.509678541Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" Feb 13 15:31:12.512754 containerd[1888]: time="2025-02-13T15:31:12.511247746Z" level=info msg="TearDown network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" successfully" Feb 13 15:31:12.512754 containerd[1888]: time="2025-02-13T15:31:12.511448959Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" returns successfully" Feb 13 15:31:12.512754 containerd[1888]: time="2025-02-13T15:31:12.512013437Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:12.512754 containerd[1888]: time="2025-02-13T15:31:12.512358571Z" level=info msg="TearDown network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" successfully" Feb 13 15:31:12.512754 containerd[1888]: time="2025-02-13T15:31:12.512377777Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" returns successfully" Feb 13 15:31:12.513235 containerd[1888]: time="2025-02-13T15:31:12.513209449Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:12.513343 containerd[1888]: time="2025-02-13T15:31:12.513301433Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:12.513343 containerd[1888]: time="2025-02-13T15:31:12.513316574Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:12.518387 containerd[1888]: time="2025-02-13T15:31:12.518341718Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:12.518522 containerd[1888]: time="2025-02-13T15:31:12.518467519Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:12.518522 containerd[1888]: time="2025-02-13T15:31:12.518483853Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:12.519196 containerd[1888]: time="2025-02-13T15:31:12.519137672Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:12.522941 containerd[1888]: time="2025-02-13T15:31:12.519301142Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:12.522941 containerd[1888]: time="2025-02-13T15:31:12.519318722Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:12.524096 containerd[1888]: time="2025-02-13T15:31:12.524023463Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:5,}" Feb 13 15:31:12.830052 systemd-networkd[1720]: calif1ffb272ba0: Link UP Feb 13 15:31:12.830449 systemd-networkd[1720]: calif1ffb272ba0: Gained carrier Feb 13 15:31:12.833390 (udev-worker)[3373]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.615 [INFO][3443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.663 [INFO][3443] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0 nginx-deployment-8587fbcb89- default d5f6a5ec-239d-4976-acea-200a963b6960 1136 0 2025-02-13 15:31:06 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.113 nginx-deployment-8587fbcb89-59dbf eth0 default [] [] [kns.default ksa.default.default] calif1ffb272ba0 [] []}} ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.663 [INFO][3443] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.707 [INFO][3467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" HandleID="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.723 [INFO][3467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" HandleID="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293170), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.113", "pod":"nginx-deployment-8587fbcb89-59dbf", "timestamp":"2025-02-13 15:31:12.707821227 +0000 UTC"}, Hostname:"172.31.26.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.723 [INFO][3467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.723 [INFO][3467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.723 [INFO][3467] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.113' Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.727 [INFO][3467] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.739 [INFO][3467] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.782 [INFO][3467] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.789 [INFO][3467] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.795 [INFO][3467] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.795 [INFO][3467] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.799 [INFO][3467] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37 Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.813 [INFO][3467] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.824 [INFO][3467] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.2/26] block=192.168.118.0/26 handle="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.824 [INFO][3467] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.2/26] handle="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" host="172.31.26.113" Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.824 [INFO][3467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:31:12.872871 containerd[1888]: 2025-02-13 15:31:12.824 [INFO][3467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.2/26] IPv6=[] ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" HandleID="k8s-pod-network.ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Workload="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.827 [INFO][3443] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"d5f6a5ec-239d-4976-acea-200a963b6960", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-59dbf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif1ffb272ba0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.827 [INFO][3443] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.2/32] ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.827 [INFO][3443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1ffb272ba0 ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.835 [INFO][3443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.836 [INFO][3443] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"d5f6a5ec-239d-4976-acea-200a963b6960", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37", Pod:"nginx-deployment-8587fbcb89-59dbf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif1ffb272ba0", MAC:"06:b8:65:20:e8:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:12.873921 containerd[1888]: 2025-02-13 15:31:12.854 [INFO][3443] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37" Namespace="default" Pod="nginx-deployment-8587fbcb89-59dbf" WorkloadEndpoint="172.31.26.113-k8s-nginx--deployment--8587fbcb89--59dbf-eth0" Feb 13 15:31:12.906370 containerd[1888]: time="2025-02-13T15:31:12.906146354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:12.906370 containerd[1888]: time="2025-02-13T15:31:12.906212865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:12.906370 containerd[1888]: time="2025-02-13T15:31:12.906236444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:12.907109 containerd[1888]: time="2025-02-13T15:31:12.906873662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:12.947307 systemd[1]: Started cri-containerd-ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37.scope - libcontainer container ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37. Feb 13 15:31:13.013934 containerd[1888]: time="2025-02-13T15:31:13.013878621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-59dbf,Uid:d5f6a5ec-239d-4976-acea-200a963b6960,Namespace:default,Attempt:5,} returns sandbox id \"ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37\"" Feb 13 15:31:13.062242 kubelet[2332]: E0213 15:31:13.062177 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:13.596708 systemd[1]: run-containerd-runc-k8s.io-c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81-runc.SsB3Af.mount: Deactivated successfully. 
Feb 13 15:31:13.975103 kernel: bpftool[3671]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:31:14.063342 kubelet[2332]: E0213 15:31:14.063275 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:14.141645 systemd-networkd[1720]: cali743c5ced4f9: Gained IPv6LL Feb 13 15:31:14.225602 containerd[1888]: time="2025-02-13T15:31:14.225545094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:14.227124 containerd[1888]: time="2025-02-13T15:31:14.226840258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:31:14.229105 containerd[1888]: time="2025-02-13T15:31:14.228369520Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:14.231691 containerd[1888]: time="2025-02-13T15:31:14.231661350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:14.233808 containerd[1888]: time="2025-02-13T15:31:14.233776442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.827307604s" Feb 13 15:31:14.233953 containerd[1888]: time="2025-02-13T15:31:14.233935097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:31:14.235060 containerd[1888]: time="2025-02-13T15:31:14.235038916Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:31:14.237702 containerd[1888]: time="2025-02-13T15:31:14.237677162Z" level=info msg="CreateContainer within sandbox \"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:31:14.261763 containerd[1888]: time="2025-02-13T15:31:14.261717081Z" level=info msg="CreateContainer within sandbox \"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7fce7b86d1c7d825528784514b913a57a9cecb2a1cb0a90000af53160111dc34\"" Feb 13 15:31:14.264128 containerd[1888]: time="2025-02-13T15:31:14.262733327Z" level=info msg="StartContainer for \"7fce7b86d1c7d825528784514b913a57a9cecb2a1cb0a90000af53160111dc34\"" Feb 13 15:31:14.364001 systemd[1]: Started cri-containerd-7fce7b86d1c7d825528784514b913a57a9cecb2a1cb0a90000af53160111dc34.scope - libcontainer container 7fce7b86d1c7d825528784514b913a57a9cecb2a1cb0a90000af53160111dc34. 
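[Annotation] The bpftool warning at the top of this block is the kernel flagging a memfd_create(2) call that set neither MFD_EXEC nor MFD_NOEXEC_SEAL; since Linux 6.3 the kernel asks callers to state explicitly whether a memfd may ever be executable. A small sketch of the explicit form using golang.org/x/sys/unix; note the MFD_NOEXEC_SEAL constant assumes a 6.3+ kernel and a recent x/sys release.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Passing MFD_NOEXEC_SEAL up front avoids the kernel warning seen in
	// the log ("memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL
	// set") and seals the fd as never-executable.
	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		log.Fatal(err) // EINVAL on kernels older than 6.3
	}
	defer unix.Close(fd)
	log.Printf("memfd created: fd=%d", fd)
}
```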
Feb 13 15:31:14.468567 containerd[1888]: time="2025-02-13T15:31:14.468517277Z" level=info msg="StartContainer for \"7fce7b86d1c7d825528784514b913a57a9cecb2a1cb0a90000af53160111dc34\" returns successfully" Feb 13 15:31:14.512672 systemd-networkd[1720]: vxlan.calico: Link UP Feb 13 15:31:14.512685 systemd-networkd[1720]: vxlan.calico: Gained carrier Feb 13 15:31:14.846058 systemd-networkd[1720]: calif1ffb272ba0: Gained IPv6LL Feb 13 15:31:15.064546 kubelet[2332]: E0213 15:31:15.064485 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:15.612586 systemd-networkd[1720]: vxlan.calico: Gained IPv6LL Feb 13 15:31:16.064680 kubelet[2332]: E0213 15:31:16.064621 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:17.065882 kubelet[2332]: E0213 15:31:17.065715 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:17.566471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858403698.mount: Deactivated successfully. Feb 13 15:31:18.066576 kubelet[2332]: E0213 15:31:18.066535 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:18.116388 ntpd[1856]: Listen normally on 7 vxlan.calico 192.168.118.0:123 Feb 13 15:31:18.116478 ntpd[1856]: Listen normally on 8 cali743c5ced4f9 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 15:31:18.117160 ntpd[1856]: 13 Feb 15:31:18 ntpd[1856]: Listen normally on 7 vxlan.calico 192.168.118.0:123 Feb 13 15:31:18.117160 ntpd[1856]: 13 Feb 15:31:18 ntpd[1856]: Listen normally on 8 cali743c5ced4f9 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 15:31:18.117160 ntpd[1856]: 13 Feb 15:31:18 ntpd[1856]: Listen normally on 9 calif1ffb272ba0 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 15:31:18.117160 ntpd[1856]: 13 Feb 15:31:18 ntpd[1856]: Listen normally on 10 vxlan.calico [fe80::6491:8ff:fe6c:9252%5]:123 Feb 13 15:31:18.116536 ntpd[1856]: Listen normally on 9 calif1ffb272ba0 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 15:31:18.116578 ntpd[1856]: Listen normally on 10 vxlan.calico [fe80::6491:8ff:fe6c:9252%5]:123 Feb 13 15:31:19.066797 kubelet[2332]: E0213 15:31:19.066712 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:19.274022 containerd[1888]: time="2025-02-13T15:31:19.273965984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:19.275627 containerd[1888]: time="2025-02-13T15:31:19.275537380Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 15:31:19.277140 containerd[1888]: time="2025-02-13T15:31:19.276619988Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:19.281275 containerd[1888]: time="2025-02-13T15:31:19.279978079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:19.281275 containerd[1888]: time="2025-02-13T15:31:19.281127740Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id 
\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.045758253s" Feb 13 15:31:19.281275 containerd[1888]: time="2025-02-13T15:31:19.281162013Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:31:19.283144 containerd[1888]: time="2025-02-13T15:31:19.283121463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:31:19.294871 containerd[1888]: time="2025-02-13T15:31:19.294819822Z" level=info msg="CreateContainer within sandbox \"ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:31:19.314364 containerd[1888]: time="2025-02-13T15:31:19.314260932Z" level=info msg="CreateContainer within sandbox \"ad1e027a5135c4f12f5015b4b5d18bb66e44af98ad61ec42187522c508ce2d37\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7af530203a2d65963d4fb27e8409ac6dc0d19ea474290d31aa07719b0ce35955\"" Feb 13 15:31:19.315701 containerd[1888]: time="2025-02-13T15:31:19.315651469Z" level=info msg="StartContainer for \"7af530203a2d65963d4fb27e8409ac6dc0d19ea474290d31aa07719b0ce35955\"" Feb 13 15:31:19.366063 systemd[1]: Started cri-containerd-7af530203a2d65963d4fb27e8409ac6dc0d19ea474290d31aa07719b0ce35955.scope - libcontainer container 7af530203a2d65963d4fb27e8409ac6dc0d19ea474290d31aa07719b0ce35955. Feb 13 15:31:19.401063 containerd[1888]: time="2025-02-13T15:31:19.401016297Z" level=info msg="StartContainer for \"7af530203a2d65963d4fb27e8409ac6dc0d19ea474290d31aa07719b0ce35955\" returns successfully" Feb 13 15:31:19.594349 kubelet[2332]: I0213 15:31:19.594283 2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-59dbf" podStartSLOduration=7.326917791 podStartE2EDuration="13.594256885s" podCreationTimestamp="2025-02-13 15:31:06 +0000 UTC" firstStartedPulling="2025-02-13 15:31:13.015622203 +0000 UTC m=+23.726909174" lastFinishedPulling="2025-02-13 15:31:19.282961292 +0000 UTC m=+29.994248268" observedRunningTime="2025-02-13 15:31:19.589314606 +0000 UTC m=+30.300601595" watchObservedRunningTime="2025-02-13 15:31:19.594256885 +0000 UTC m=+30.305543868" Feb 13 15:31:20.067447 kubelet[2332]: E0213 15:31:20.067340 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:20.686186 update_engine[1866]: I20250213 15:31:20.686111 1866 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:31:20.810613 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3888) Feb 13 15:31:20.913131 containerd[1888]: time="2025-02-13T15:31:20.913057142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:20.915902 containerd[1888]: time="2025-02-13T15:31:20.915843226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:31:20.919753 containerd[1888]: time="2025-02-13T15:31:20.918746718Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:20.926133 containerd[1888]: time="2025-02-13T15:31:20.925864663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:20.926990 containerd[1888]: time="2025-02-13T15:31:20.926944646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.643787738s" Feb 13 15:31:20.927434 containerd[1888]: time="2025-02-13T15:31:20.926994475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:31:20.930455 containerd[1888]: time="2025-02-13T15:31:20.930417321Z" level=info msg="CreateContainer within sandbox \"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:31:20.964443 containerd[1888]: time="2025-02-13T15:31:20.963447488Z" level=info msg="CreateContainer within sandbox \"f2b343447051be2903c0267edd26184521a0f8f6ed64501cec4bc6ab33aaba7c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f91b14c8c4c45a901943cf00320ffa416cf7baa9798778c6d5218622616cfa7\"" Feb 13 15:31:20.964443 containerd[1888]: time="2025-02-13T15:31:20.964085569Z" level=info msg="StartContainer for \"9f91b14c8c4c45a901943cf00320ffa416cf7baa9798778c6d5218622616cfa7\"" Feb 13 15:31:21.071099 kubelet[2332]: E0213 15:31:21.069463 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:21.113333 systemd[1]: Started cri-containerd-9f91b14c8c4c45a901943cf00320ffa416cf7baa9798778c6d5218622616cfa7.scope - libcontainer container 9f91b14c8c4c45a901943cf00320ffa416cf7baa9798778c6d5218622616cfa7. 
Feb 13 15:31:21.145503 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3891) Feb 13 15:31:21.194174 containerd[1888]: time="2025-02-13T15:31:21.193396777Z" level=info msg="StartContainer for \"9f91b14c8c4c45a901943cf00320ffa416cf7baa9798778c6d5218622616cfa7\" returns successfully" Feb 13 15:31:21.288667 kubelet[2332]: I0213 15:31:21.288616 2332 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:31:21.289109 kubelet[2332]: I0213 15:31:21.288893 2332 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:31:22.074628 kubelet[2332]: E0213 15:31:22.074573 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:23.075033 kubelet[2332]: E0213 15:31:23.074978 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:24.075657 kubelet[2332]: E0213 15:31:24.075602 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:25.076343 kubelet[2332]: E0213 15:31:25.076285 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:26.077324 kubelet[2332]: E0213 15:31:26.077266 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:27.077689 kubelet[2332]: E0213 15:31:27.077634 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:28.078581 kubelet[2332]: E0213 15:31:28.078528 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:28.513908 kubelet[2332]: I0213 15:31:28.513786 2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hk2gm" podStartSLOduration=29.991001472 podStartE2EDuration="38.513767098s" podCreationTimestamp="2025-02-13 15:30:50 +0000 UTC" firstStartedPulling="2025-02-13 15:31:12.405902789 +0000 UTC m=+23.117189755" lastFinishedPulling="2025-02-13 15:31:20.928668399 +0000 UTC m=+31.639955381" observedRunningTime="2025-02-13 15:31:21.628207032 +0000 UTC m=+32.339494021" watchObservedRunningTime="2025-02-13 15:31:28.513767098 +0000 UTC m=+39.225054065" Feb 13 15:31:28.520066 systemd[1]: Created slice kubepods-besteffort-podccce863a_fac1_414d_81ee_f30f1b03246f.slice - libcontainer container kubepods-besteffort-podccce863a_fac1_414d_81ee_f30f1b03246f.slice. 
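[Annotation] The kubelet error threading through this whole section, repeating roughly once per second ("Unable to read config path ... /etc/kubernetes/manifests"), is the static-pod file source polling a staticPodPath that was never created; it is harmless noise unless static pods are expected on this node. A minimal sketch of the simplest fix under that assumption; the path is the one in the log, and an empty directory just means "no static pods". Alternatively, dropping staticPodPath from the kubelet configuration disables the file source entirely.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Creating the directory the kubelet polls makes the repeating
	// "Unable to read config path" error in the log go away.
	const staticPodPath = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(staticPodPath, 0o755); err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s; kubelet will watch it instead of erroring", staticPodPath)
}
```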
Feb 13 15:31:28.663089 kubelet[2332]: I0213 15:31:28.663035 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ccce863a-fac1-414d-81ee-f30f1b03246f-data\") pod \"nfs-server-provisioner-0\" (UID: \"ccce863a-fac1-414d-81ee-f30f1b03246f\") " pod="default/nfs-server-provisioner-0" Feb 13 15:31:28.664021 kubelet[2332]: I0213 15:31:28.663160 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ss9p\" (UniqueName: \"kubernetes.io/projected/ccce863a-fac1-414d-81ee-f30f1b03246f-kube-api-access-7ss9p\") pod \"nfs-server-provisioner-0\" (UID: \"ccce863a-fac1-414d-81ee-f30f1b03246f\") " pod="default/nfs-server-provisioner-0" Feb 13 15:31:28.826663 containerd[1888]: time="2025-02-13T15:31:28.825990897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccce863a-fac1-414d-81ee-f30f1b03246f,Namespace:default,Attempt:0,}" Feb 13 15:31:29.080375 kubelet[2332]: E0213 15:31:29.080238 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:29.161811 systemd-networkd[1720]: cali60e51b789ff: Link UP Feb 13 15:31:29.164295 systemd-networkd[1720]: cali60e51b789ff: Gained carrier Feb 13 15:31:29.167846 (udev-worker)[4128]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:28.948 [INFO][4110] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.113-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ccce863a-fac1-414d-81ee-f30f1b03246f 1244 0 2025-02-13 15:31:28 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.26.113 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:28.948 [INFO][4110] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.003 [INFO][4122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" HandleID="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Workload="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 
15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.022 [INFO][4122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" HandleID="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Workload="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.113", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:31:29.003942173 +0000 UTC"}, Hostname:"172.31.26.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.022 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.022 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.022 [INFO][4122] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.113' Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.030 [INFO][4122] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.055 [INFO][4122] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.068 [INFO][4122] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.090 [INFO][4122] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.106 [INFO][4122] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.106 [INFO][4122] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.111 [INFO][4122] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.124 [INFO][4122] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.141 [INFO][4122] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.3/26] block=192.168.118.0/26 handle="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.141 [INFO][4122] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.3/26] handle="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" host="172.31.26.113" Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.142 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM 
lock. Feb 13 15:31:29.188194 containerd[1888]: 2025-02-13 15:31:29.142 [INFO][4122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.3/26] IPv6=[] ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" HandleID="k8s-pod-network.04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Workload="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.192360 containerd[1888]: 2025-02-13 15:31:29.145 [INFO][4110] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ccce863a-fac1-414d-81ee-f30f1b03246f", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:29.192360 containerd[1888]: 2025-02-13 15:31:29.146 [INFO][4110] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.3/32] ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.192360 containerd[1888]: 2025-02-13 15:31:29.146 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.192360 containerd[1888]: 2025-02-13 15:31:29.163 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.192756 containerd[1888]: 2025-02-13 15:31:29.163 [INFO][4110] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ccce863a-fac1-414d-81ee-f30f1b03246f", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:f2:8a:5e:5e:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:31:29.192756 containerd[1888]: 2025-02-13 15:31:29.184 [INFO][4110] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.113-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:31:29.226366 containerd[1888]: time="2025-02-13T15:31:29.225545237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:29.226366 containerd[1888]: time="2025-02-13T15:31:29.226296982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:29.226366 containerd[1888]: time="2025-02-13T15:31:29.226313861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:29.226944 containerd[1888]: time="2025-02-13T15:31:29.226419107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:29.263301 systemd[1]: Started cri-containerd-04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab.scope - libcontainer container 04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab. 
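[Annotation] The NFS provisioner endpoint above is the first in this log to carry named ports, and the %#v-style dump prints them in hex: Port:0x801 is 2049 (nfs), 0x8023 is 32803 (nlockmgr), 0x4e50 is 20048 (mountd), 0x36b is 875 (rquotad), 0x6f is 111 (rpcbind), and 0x296 is 662 (statd), matching the decimal ports listed in the earlier cni-plugin/plugin.go line. A quick Go check of that decoding:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpoint dump above,
	// mapped back to the decimal ports in the pod spec.
	ports := map[string]uint16{
		"nfs":      0x801,  // 2049
		"nlockmgr": 0x8023, // 32803
		"mountd":   0x4e50, // 20048
		"rquotad":  0x36b,  // 875
		"rpcbind":  0x6f,   // 111
		"statd":    0x296,  // 662
	}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p)
	}
}
```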
Feb 13 15:31:29.323677 containerd[1888]: time="2025-02-13T15:31:29.323639486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccce863a-fac1-414d-81ee-f30f1b03246f,Namespace:default,Attempt:0,} returns sandbox id \"04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab\"" Feb 13 15:31:29.325769 containerd[1888]: time="2025-02-13T15:31:29.325407269Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:31:30.041524 kubelet[2332]: E0213 15:31:30.039435 2332 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:30.081417 kubelet[2332]: E0213 15:31:30.081308 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:30.908636 systemd-networkd[1720]: cali60e51b789ff: Gained IPv6LL Feb 13 15:31:31.081848 kubelet[2332]: E0213 15:31:31.081810 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:31.485773 systemd[1]: run-containerd-runc-k8s.io-c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81-runc.oeHcW1.mount: Deactivated successfully. Feb 13 15:31:32.030157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791007811.mount: Deactivated successfully. Feb 13 15:31:32.082738 kubelet[2332]: E0213 15:31:32.082682 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:33.083107 kubelet[2332]: E0213 15:31:33.083054 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:33.116535 ntpd[1856]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 15:31:33.118648 ntpd[1856]: 13 Feb 15:31:33 ntpd[1856]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 15:31:34.084888 kubelet[2332]: E0213 15:31:34.084853 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:34.529115 containerd[1888]: time="2025-02-13T15:31:34.529035412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:34.533102 containerd[1888]: time="2025-02-13T15:31:34.531950753Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 15:31:34.535094 containerd[1888]: time="2025-02-13T15:31:34.534182649Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:34.573138 containerd[1888]: time="2025-02-13T15:31:34.573031584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:34.574473 containerd[1888]: time="2025-02-13T15:31:34.574432600Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.248980353s" Feb 13 15:31:34.574634 containerd[1888]: time="2025-02-13T15:31:34.574612630Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 15:31:34.577397 containerd[1888]: time="2025-02-13T15:31:34.577367584Z" level=info msg="CreateContainer within sandbox \"04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:31:34.599356 containerd[1888]: time="2025-02-13T15:31:34.599314138Z" level=info msg="CreateContainer within sandbox \"04ab167aa35ba53e425e4b6ff3fbb99ff5611b30ba629d1f4d74519b2c1eb4ab\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bbf3d19da5851f52dfb6e3352c2840adb15f620cf6577e8e6801a8888e84a8bb\"" Feb 13 15:31:34.599977 containerd[1888]: time="2025-02-13T15:31:34.599942813Z" level=info msg="StartContainer for \"bbf3d19da5851f52dfb6e3352c2840adb15f620cf6577e8e6801a8888e84a8bb\"" Feb 13 15:31:34.641342 systemd[1]: Started cri-containerd-bbf3d19da5851f52dfb6e3352c2840adb15f620cf6577e8e6801a8888e84a8bb.scope - libcontainer container bbf3d19da5851f52dfb6e3352c2840adb15f620cf6577e8e6801a8888e84a8bb. Feb 13 15:31:34.672888 containerd[1888]: time="2025-02-13T15:31:34.672802772Z" level=info msg="StartContainer for \"bbf3d19da5851f52dfb6e3352c2840adb15f620cf6577e8e6801a8888e84a8bb\" returns successfully" Feb 13 15:31:35.086522 kubelet[2332]: E0213 15:31:35.086418 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:36.086717 kubelet[2332]: E0213 15:31:36.086659 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:37.087151 kubelet[2332]: E0213 15:31:37.087107 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:38.087935 kubelet[2332]: E0213 15:31:38.087885 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:39.088719 kubelet[2332]: E0213 15:31:39.088666 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:40.089565 kubelet[2332]: E0213 15:31:40.089512 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:41.090627 kubelet[2332]: E0213 15:31:41.090589 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:42.091033 kubelet[2332]: E0213 15:31:42.090979 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:43.092054 kubelet[2332]: E0213 15:31:43.092013 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:44.092916 kubelet[2332]: E0213 15:31:44.092857 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:45.093523 kubelet[2332]: E0213 15:31:45.093465 2332 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:46.094528 kubelet[2332]: E0213 15:31:46.094412 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:47.095354 kubelet[2332]: E0213 15:31:47.095297 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:48.095739 kubelet[2332]: E0213 15:31:48.095682 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:49.095929 kubelet[2332]: E0213 15:31:49.095860 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:50.039151 kubelet[2332]: E0213 15:31:50.039096 2332 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:50.120314 kubelet[2332]: E0213 15:31:50.120189 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:50.133542 containerd[1888]: time="2025-02-13T15:31:50.133500566Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:50.133956 containerd[1888]: time="2025-02-13T15:31:50.133626115Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:50.133956 containerd[1888]: time="2025-02-13T15:31:50.133642275Z" level=info msg="StopPodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:50.140924 containerd[1888]: time="2025-02-13T15:31:50.140872999Z" level=info msg="RemovePodSandbox for \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:50.167813 containerd[1888]: time="2025-02-13T15:31:50.167549300Z" level=info msg="Forcibly stopping sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\"" Feb 13 15:31:50.167813 containerd[1888]: time="2025-02-13T15:31:50.167696592Z" level=info msg="TearDown network for sandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" successfully" Feb 13 15:31:50.175880 containerd[1888]: time="2025-02-13T15:31:50.175607737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.175880 containerd[1888]: time="2025-02-13T15:31:50.175725045Z" level=info msg="RemovePodSandbox \"46134515933e921813abbe9fb3de15aa14eebe80008fd62107991acaee9f2e55\" returns successfully" Feb 13 15:31:50.181990 containerd[1888]: time="2025-02-13T15:31:50.179859443Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:50.181990 containerd[1888]: time="2025-02-13T15:31:50.180007667Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:50.181990 containerd[1888]: time="2025-02-13T15:31:50.180024582Z" level=info msg="StopPodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:50.183711 containerd[1888]: time="2025-02-13T15:31:50.183657072Z" level=info msg="RemovePodSandbox for \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:50.183814 containerd[1888]: time="2025-02-13T15:31:50.183718779Z" level=info msg="Forcibly stopping sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\"" Feb 13 15:31:50.183882 containerd[1888]: time="2025-02-13T15:31:50.183830612Z" level=info msg="TearDown network for sandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" successfully" Feb 13 15:31:50.188315 containerd[1888]: time="2025-02-13T15:31:50.188267841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.188328862Z" level=info msg="RemovePodSandbox \"6fa01d1a8c3428ebedb5732a1f650d0977ca889d321bf73ebdccf69ddc5f9786\" returns successfully" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.188796553Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.188996409Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.189013969Z" level=info msg="StopPodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.189378565Z" level=info msg="RemovePodSandbox for \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.189403863Z" level=info msg="Forcibly stopping sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\"" Feb 13 15:31:50.189752 containerd[1888]: time="2025-02-13T15:31:50.189481076Z" level=info msg="TearDown network for sandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" successfully" Feb 13 15:31:50.192401 containerd[1888]: time="2025-02-13T15:31:50.192358442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.192510 containerd[1888]: time="2025-02-13T15:31:50.192421665Z" level=info msg="RemovePodSandbox \"c2062018143225935d831555441151f166411d89d5c3a73413352e64c89d0626\" returns successfully" Feb 13 15:31:50.192911 containerd[1888]: time="2025-02-13T15:31:50.192866128Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:50.193028 containerd[1888]: time="2025-02-13T15:31:50.192967402Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:50.193101 containerd[1888]: time="2025-02-13T15:31:50.193041002Z" level=info msg="StopPodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:50.195020 containerd[1888]: time="2025-02-13T15:31:50.194997085Z" level=info msg="RemovePodSandbox for \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:50.196091 containerd[1888]: time="2025-02-13T15:31:50.195154862Z" level=info msg="Forcibly stopping sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\"" Feb 13 15:31:50.196091 containerd[1888]: time="2025-02-13T15:31:50.195242192Z" level=info msg="TearDown network for sandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" successfully" Feb 13 15:31:50.198519 containerd[1888]: time="2025-02-13T15:31:50.198474595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.198598 containerd[1888]: time="2025-02-13T15:31:50.198526089Z" level=info msg="RemovePodSandbox \"0111c65653a34a4775b455233cef655a6171f473130bb8a37089cc787e1e08b5\" returns successfully" Feb 13 15:31:50.199757 containerd[1888]: time="2025-02-13T15:31:50.199727820Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:50.199853 containerd[1888]: time="2025-02-13T15:31:50.199831471Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:50.199920 containerd[1888]: time="2025-02-13T15:31:50.199851079Z" level=info msg="StopPodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:50.200209 containerd[1888]: time="2025-02-13T15:31:50.200183941Z" level=info msg="RemovePodSandbox for \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:50.200285 containerd[1888]: time="2025-02-13T15:31:50.200210449Z" level=info msg="Forcibly stopping sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\"" Feb 13 15:31:50.200343 containerd[1888]: time="2025-02-13T15:31:50.200306051Z" level=info msg="TearDown network for sandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" successfully" Feb 13 15:31:50.203232 containerd[1888]: time="2025-02-13T15:31:50.203192822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.203319 containerd[1888]: time="2025-02-13T15:31:50.203237845Z" level=info msg="RemovePodSandbox \"f9384dcdeebd925b8031a46bcebeef6dd9a5bf492ccf93dd7f189836d41c7871\" returns successfully" Feb 13 15:31:50.203614 containerd[1888]: time="2025-02-13T15:31:50.203557855Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:50.203696 containerd[1888]: time="2025-02-13T15:31:50.203668931Z" level=info msg="TearDown network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" successfully" Feb 13 15:31:50.203696 containerd[1888]: time="2025-02-13T15:31:50.203685158Z" level=info msg="StopPodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" returns successfully" Feb 13 15:31:50.204013 containerd[1888]: time="2025-02-13T15:31:50.203991968Z" level=info msg="RemovePodSandbox for \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:50.204115 containerd[1888]: time="2025-02-13T15:31:50.204094150Z" level=info msg="Forcibly stopping sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\"" Feb 13 15:31:50.204218 containerd[1888]: time="2025-02-13T15:31:50.204171829Z" level=info msg="TearDown network for sandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" successfully" Feb 13 15:31:50.207337 containerd[1888]: time="2025-02-13T15:31:50.207309491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.207433 containerd[1888]: time="2025-02-13T15:31:50.207356022Z" level=info msg="RemovePodSandbox \"c62379b0e0a89e264f2d6158ea14067b20260d10c62f4009b6a5629675c32a69\" returns successfully" Feb 13 15:31:50.207724 containerd[1888]: time="2025-02-13T15:31:50.207700774Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" Feb 13 15:31:50.207923 containerd[1888]: time="2025-02-13T15:31:50.207891138Z" level=info msg="TearDown network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" successfully" Feb 13 15:31:50.207923 containerd[1888]: time="2025-02-13T15:31:50.207914112Z" level=info msg="StopPodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" returns successfully" Feb 13 15:31:50.208215 containerd[1888]: time="2025-02-13T15:31:50.208192470Z" level=info msg="RemovePodSandbox for \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" Feb 13 15:31:50.208349 containerd[1888]: time="2025-02-13T15:31:50.208216805Z" level=info msg="Forcibly stopping sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\"" Feb 13 15:31:50.208399 containerd[1888]: time="2025-02-13T15:31:50.208359058Z" level=info msg="TearDown network for sandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" successfully" Feb 13 15:31:50.212533 containerd[1888]: time="2025-02-13T15:31:50.212496484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.212690 containerd[1888]: time="2025-02-13T15:31:50.212598740Z" level=info msg="RemovePodSandbox \"0e914a0e62ab2c715489e22ce7216965150a576638a81af3992c00ef0b0fd526\" returns successfully" Feb 13 15:31:50.213058 containerd[1888]: time="2025-02-13T15:31:50.213029472Z" level=info msg="StopPodSandbox for \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\"" Feb 13 15:31:50.213184 containerd[1888]: time="2025-02-13T15:31:50.213158381Z" level=info msg="TearDown network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" successfully" Feb 13 15:31:50.213249 containerd[1888]: time="2025-02-13T15:31:50.213179474Z" level=info msg="StopPodSandbox for \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" returns successfully" Feb 13 15:31:50.213560 containerd[1888]: time="2025-02-13T15:31:50.213537364Z" level=info msg="RemovePodSandbox for \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\"" Feb 13 15:31:50.213668 containerd[1888]: time="2025-02-13T15:31:50.213563010Z" level=info msg="Forcibly stopping sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\"" Feb 13 15:31:50.213717 containerd[1888]: time="2025-02-13T15:31:50.213653837Z" level=info msg="TearDown network for sandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" successfully" Feb 13 15:31:50.220868 containerd[1888]: time="2025-02-13T15:31:50.220822503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.220971 containerd[1888]: time="2025-02-13T15:31:50.220875111Z" level=info msg="RemovePodSandbox \"287a2e31e254699d73db7f7068c7a7d1c18a77ac84212b45296cdc13e6198009\" returns successfully" Feb 13 15:31:50.221585 containerd[1888]: time="2025-02-13T15:31:50.221457784Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:50.221679 containerd[1888]: time="2025-02-13T15:31:50.221634792Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:50.221679 containerd[1888]: time="2025-02-13T15:31:50.221652852Z" level=info msg="StopPodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:50.221975 containerd[1888]: time="2025-02-13T15:31:50.221950990Z" level=info msg="RemovePodSandbox for \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:50.222139 containerd[1888]: time="2025-02-13T15:31:50.221978320Z" level=info msg="Forcibly stopping sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\"" Feb 13 15:31:50.222201 containerd[1888]: time="2025-02-13T15:31:50.222121474Z" level=info msg="TearDown network for sandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" successfully" Feb 13 15:31:50.242274 containerd[1888]: time="2025-02-13T15:31:50.242158578Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.242428 containerd[1888]: time="2025-02-13T15:31:50.242281568Z" level=info msg="RemovePodSandbox \"7dfd9e05d94a6c710fdcc2ec9fc32b71f2ec1dfd37bef64567534e836f51cd3d\" returns successfully" Feb 13 15:31:50.242770 containerd[1888]: time="2025-02-13T15:31:50.242679512Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:50.242875 containerd[1888]: time="2025-02-13T15:31:50.242845959Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:50.242932 containerd[1888]: time="2025-02-13T15:31:50.242871655Z" level=info msg="StopPodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:50.243223 containerd[1888]: time="2025-02-13T15:31:50.243195410Z" level=info msg="RemovePodSandbox for \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:50.243223 containerd[1888]: time="2025-02-13T15:31:50.243222270Z" level=info msg="Forcibly stopping sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\"" Feb 13 15:31:50.243353 containerd[1888]: time="2025-02-13T15:31:50.243299486Z" level=info msg="TearDown network for sandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" successfully" Feb 13 15:31:50.248464 containerd[1888]: time="2025-02-13T15:31:50.248417011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.248580 containerd[1888]: time="2025-02-13T15:31:50.248471666Z" level=info msg="RemovePodSandbox \"1441ff552043c3e280e84ba9d0bc6aa7ea3bec9192028af9e43301dd10366345\" returns successfully" Feb 13 15:31:50.248890 containerd[1888]: time="2025-02-13T15:31:50.248868994Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:50.249003 containerd[1888]: time="2025-02-13T15:31:50.248974819Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:50.249003 containerd[1888]: time="2025-02-13T15:31:50.248994672Z" level=info msg="StopPodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:50.250436 containerd[1888]: time="2025-02-13T15:31:50.249370275Z" level=info msg="RemovePodSandbox for \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:50.250436 containerd[1888]: time="2025-02-13T15:31:50.249460799Z" level=info msg="Forcibly stopping sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\"" Feb 13 15:31:50.250436 containerd[1888]: time="2025-02-13T15:31:50.249519804Z" level=info msg="TearDown network for sandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" successfully" Feb 13 15:31:50.251947 containerd[1888]: time="2025-02-13T15:31:50.251907982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.252033 containerd[1888]: time="2025-02-13T15:31:50.251954167Z" level=info msg="RemovePodSandbox \"5594f31a95453176efd5fcfe64d4da181254b168de14cf3c490db6f4a10b5a60\" returns successfully" Feb 13 15:31:50.252354 containerd[1888]: time="2025-02-13T15:31:50.252277784Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:50.253891 containerd[1888]: time="2025-02-13T15:31:50.253826365Z" level=info msg="TearDown network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" successfully" Feb 13 15:31:50.253891 containerd[1888]: time="2025-02-13T15:31:50.253878719Z" level=info msg="StopPodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" returns successfully" Feb 13 15:31:50.254281 containerd[1888]: time="2025-02-13T15:31:50.254257663Z" level=info msg="RemovePodSandbox for \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:50.254350 containerd[1888]: time="2025-02-13T15:31:50.254282823Z" level=info msg="Forcibly stopping sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\"" Feb 13 15:31:50.254395 containerd[1888]: time="2025-02-13T15:31:50.254357068Z" level=info msg="TearDown network for sandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" successfully" Feb 13 15:31:50.257635 containerd[1888]: time="2025-02-13T15:31:50.257606495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:31:50.257740 containerd[1888]: time="2025-02-13T15:31:50.257652724Z" level=info msg="RemovePodSandbox \"394f10ec7761b00f05e2f3cd55b3945dc0f3d13d154fe6e57a457ab76c57d0cc\" returns successfully" Feb 13 15:31:50.258051 containerd[1888]: time="2025-02-13T15:31:50.257986635Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" Feb 13 15:31:50.258146 containerd[1888]: time="2025-02-13T15:31:50.258131012Z" level=info msg="TearDown network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" successfully" Feb 13 15:31:50.258203 containerd[1888]: time="2025-02-13T15:31:50.258147276Z" level=info msg="StopPodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" returns successfully" Feb 13 15:31:50.258553 containerd[1888]: time="2025-02-13T15:31:50.258411815Z" level=info msg="RemovePodSandbox for \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" Feb 13 15:31:50.258653 containerd[1888]: time="2025-02-13T15:31:50.258552712Z" level=info msg="Forcibly stopping sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\"" Feb 13 15:31:50.258719 containerd[1888]: time="2025-02-13T15:31:50.258664074Z" level=info msg="TearDown network for sandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" successfully" Feb 13 15:31:50.261209 containerd[1888]: time="2025-02-13T15:31:50.261180776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:31:50.261302 containerd[1888]: time="2025-02-13T15:31:50.261226387Z" level=info msg="RemovePodSandbox \"43cfc2a1cd891f872194322b76bd7ee6a5f76f87624550d7661ad2db6fdf58dd\" returns successfully" Feb 13 15:31:51.121155 kubelet[2332]: E0213 15:31:51.121107 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:52.121711 kubelet[2332]: E0213 15:31:52.121665 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:53.122699 kubelet[2332]: E0213 15:31:53.122655 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:54.123331 kubelet[2332]: E0213 15:31:54.123261 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:55.124530 kubelet[2332]: E0213 15:31:55.124472 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:56.125290 kubelet[2332]: E0213 15:31:56.125231 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:57.125652 kubelet[2332]: E0213 15:31:57.125606 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:58.125872 kubelet[2332]: E0213 15:31:58.125813 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:59.126444 kubelet[2332]: E0213 15:31:59.126386 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:31:59.182674 kubelet[2332]: I0213 15:31:59.182612 2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=25.931713139 podStartE2EDuration="31.18259543s" podCreationTimestamp="2025-02-13 15:31:28 +0000 UTC" firstStartedPulling="2025-02-13 15:31:29.324921821 +0000 UTC m=+40.036208788" lastFinishedPulling="2025-02-13 15:31:34.575804091 +0000 UTC m=+45.287091079" observedRunningTime="2025-02-13 15:31:34.697319434 +0000 UTC m=+45.408606422" watchObservedRunningTime="2025-02-13 15:31:59.18259543 +0000 UTC m=+69.893882431" Feb 13 15:31:59.193066 systemd[1]: Created slice kubepods-besteffort-poda034da10_0c57_4e5f_a554_9c2313c7b085.slice - libcontainer container kubepods-besteffort-poda034da10_0c57_4e5f_a554_9c2313c7b085.slice. 
Feb 13 15:31:59.377046 kubelet[2332]: I0213 15:31:59.376845 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx2s\" (UniqueName: \"kubernetes.io/projected/a034da10-0c57-4e5f-a554-9c2313c7b085-kube-api-access-5sx2s\") pod \"test-pod-1\" (UID: \"a034da10-0c57-4e5f-a554-9c2313c7b085\") " pod="default/test-pod-1" Feb 13 15:31:59.377046 kubelet[2332]: I0213 15:31:59.376908 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f22a3aa2-8356-4967-8b03-6649c36b65f2\" (UniqueName: \"kubernetes.io/nfs/a034da10-0c57-4e5f-a554-9c2313c7b085-pvc-f22a3aa2-8356-4967-8b03-6649c36b65f2\") pod \"test-pod-1\" (UID: \"a034da10-0c57-4e5f-a554-9c2313c7b085\") " pod="default/test-pod-1" Feb 13 15:31:59.552497 kernel: FS-Cache: Loaded Feb 13 15:31:59.643579 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:31:59.643715 kernel: RPC: Registered udp transport module. Feb 13 15:31:59.643746 kernel: RPC: Registered tcp transport module. Feb 13 15:31:59.644510 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:31:59.644584 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 15:32:00.118316 kernel: NFS: Registering the id_resolver key type Feb 13 15:32:00.118674 kernel: Key type id_resolver registered Feb 13 15:32:00.118713 kernel: Key type id_legacy registered Feb 13 15:32:00.127259 kubelet[2332]: E0213 15:32:00.127127 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:00.160465 nfsidmap[4346]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 15:32:00.165031 nfsidmap[4347]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 15:32:00.432383 containerd[1888]: time="2025-02-13T15:32:00.431941723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a034da10-0c57-4e5f-a554-9c2313c7b085,Namespace:default,Attempt:0,}" Feb 13 15:32:00.616700 (udev-worker)[4335]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:32:00.617974 systemd-networkd[1720]: cali5ec59c6bf6e: Link UP Feb 13 15:32:00.619364 systemd-networkd[1720]: cali5ec59c6bf6e: Gained carrier Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.513 [INFO][4349] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.113-k8s-test--pod--1-eth0 default a034da10-0c57-4e5f-a554-9c2313c7b085 1347 0 2025-02-13 15:31:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.113 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.513 [INFO][4349] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.548 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" HandleID="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Workload="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.562 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" HandleID="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Workload="172.31.26.113-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d1f40), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.113", "pod":"test-pod-1", "timestamp":"2025-02-13 15:32:00.548419132 +0000 UTC"}, Hostname:"172.31.26.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.563 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.563 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.563 [INFO][4359] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.113' Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.566 [INFO][4359] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.571 [INFO][4359] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.578 [INFO][4359] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.581 [INFO][4359] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.584 [INFO][4359] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.584 [INFO][4359] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.586 [INFO][4359] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489 Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.593 [INFO][4359] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.610 [INFO][4359] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.4/26] block=192.168.118.0/26 handle="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.610 [INFO][4359] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.4/26] handle="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" host="172.31.26.113" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.610 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.610 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.4/26] IPv6=[] ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" HandleID="k8s-pod-network.2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Workload="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.641017 containerd[1888]: 2025-02-13 15:32:00.613 [INFO][4349] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a034da10-0c57-4e5f-a554-9c2313c7b085", ResourceVersion:"1347", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:00.643121 containerd[1888]: 2025-02-13 15:32:00.613 [INFO][4349] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.4/32] ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.643121 containerd[1888]: 2025-02-13 15:32:00.613 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.643121 containerd[1888]: 2025-02-13 15:32:00.619 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.643121 containerd[1888]: 2025-02-13 15:32:00.619 [INFO][4349] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a034da10-0c57-4e5f-a554-9c2313c7b085", ResourceVersion:"1347", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 31, 30, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.113", ContainerID:"2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a6:67:9b:a6:e3:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:00.643121 containerd[1888]: 2025-02-13 15:32:00.639 [INFO][4349] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.113-k8s-test--pod--1-eth0" Feb 13 15:32:00.678545 containerd[1888]: time="2025-02-13T15:32:00.678338698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:00.678545 containerd[1888]: time="2025-02-13T15:32:00.678430167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:00.678545 containerd[1888]: time="2025-02-13T15:32:00.678446877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.679210 containerd[1888]: time="2025-02-13T15:32:00.679133002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.713281 systemd[1]: Started cri-containerd-2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489.scope - libcontainer container 2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489. 
Feb 13 15:32:00.770003 containerd[1888]: time="2025-02-13T15:32:00.769963138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a034da10-0c57-4e5f-a554-9c2313c7b085,Namespace:default,Attempt:0,} returns sandbox id \"2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489\"" Feb 13 15:32:00.775346 containerd[1888]: time="2025-02-13T15:32:00.775298020Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:32:01.127719 kubelet[2332]: E0213 15:32:01.127665 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:01.296196 containerd[1888]: time="2025-02-13T15:32:01.296127667Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:32:01.302629 containerd[1888]: time="2025-02-13T15:32:01.301965580Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 526.629144ms" Feb 13 15:32:01.302629 containerd[1888]: time="2025-02-13T15:32:01.302020508Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:32:01.307957 containerd[1888]: time="2025-02-13T15:32:01.307903502Z" level=info msg="CreateContainer within sandbox \"2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:32:01.312401 containerd[1888]: time="2025-02-13T15:32:01.312315693Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:01.336619 containerd[1888]: time="2025-02-13T15:32:01.335482700Z" level=info msg="CreateContainer within sandbox \"2ed2d7f9612d05aa1deab0d6b8bc7cdfc4ea3768e7e99fe83df0ce4beacbf489\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2671e932338a7b22fd192c395f6a2844786cb263721c5fe8251d143ad7a53ce6\"" Feb 13 15:32:01.355987 containerd[1888]: time="2025-02-13T15:32:01.348978051Z" level=info msg="StartContainer for \"2671e932338a7b22fd192c395f6a2844786cb263721c5fe8251d143ad7a53ce6\"" Feb 13 15:32:01.354732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103242081.mount: Deactivated successfully. Feb 13 15:32:01.471482 systemd[1]: Started cri-containerd-2671e932338a7b22fd192c395f6a2844786cb263721c5fe8251d143ad7a53ce6.scope - libcontainer container 2671e932338a7b22fd192c395f6a2844786cb263721c5fe8251d143ad7a53ce6. 
Feb 13 15:32:01.574508 containerd[1888]: time="2025-02-13T15:32:01.574438752Z" level=info msg="StartContainer for \"2671e932338a7b22fd192c395f6a2844786cb263721c5fe8251d143ad7a53ce6\" returns successfully" Feb 13 15:32:02.128106 kubelet[2332]: E0213 15:32:02.128044 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:02.204312 systemd-networkd[1720]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 15:32:03.128862 kubelet[2332]: E0213 15:32:03.128813 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:04.129047 kubelet[2332]: E0213 15:32:04.128995 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:05.116375 ntpd[1856]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:32:05.116799 ntpd[1856]: 13 Feb 15:32:05 ntpd[1856]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:32:05.129488 kubelet[2332]: E0213 15:32:05.129358 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:06.130183 kubelet[2332]: E0213 15:32:06.130113 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:07.130928 kubelet[2332]: E0213 15:32:07.130872 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:08.131642 kubelet[2332]: E0213 15:32:08.131595 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:09.132101 kubelet[2332]: E0213 15:32:09.132048 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:10.039874 kubelet[2332]: E0213 15:32:10.039736 2332 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:10.133123 kubelet[2332]: E0213 15:32:10.133060 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:11.133628 kubelet[2332]: E0213 15:32:11.133575 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:12.134657 kubelet[2332]: E0213 15:32:12.134553 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:13.134968 kubelet[2332]: E0213 15:32:13.134913 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:14.135872 kubelet[2332]: E0213 15:32:14.135678 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:15.136000 kubelet[2332]: E0213 15:32:15.135946 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:16.137097 kubelet[2332]: E0213 15:32:16.137044 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:17.137476 kubelet[2332]: E0213 15:32:17.137429 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:18.138529 kubelet[2332]: E0213 15:32:18.138474 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:19.139723 kubelet[2332]: E0213 15:32:19.139665 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:20.140313 kubelet[2332]: E0213 15:32:20.140266 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:21.140558 kubelet[2332]: E0213 15:32:21.140497 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:22.141025 kubelet[2332]: E0213 15:32:22.140970 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:23.141394 kubelet[2332]: E0213 15:32:23.141336 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:24.141523 kubelet[2332]: E0213 15:32:24.141463 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:25.142432 kubelet[2332]: E0213 15:32:25.142359 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:26.143571 kubelet[2332]: E0213 15:32:26.143524 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:27.143749 kubelet[2332]: E0213 15:32:27.143690 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:28.144218 kubelet[2332]: E0213 15:32:28.144163 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:29.144459 kubelet[2332]: E0213 15:32:29.144397 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:30.039356 kubelet[2332]: E0213 15:32:30.039301 2332 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:30.145487 kubelet[2332]: E0213 15:32:30.145444 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:31.146009 kubelet[2332]: E0213 15:32:31.145954 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:31.419871 systemd[1]: run-containerd-runc-k8s.io-c8a9ebad7bfab42598f20c8205f71da2181e8af389b703e377db3c6274d7be81-runc.y6W3wL.mount: Deactivated successfully. 
Feb 13 15:32:32.084321 kubelet[2332]: E0213 15:32:32.084185 2332 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.113?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 13 15:32:32.146926 kubelet[2332]: E0213 15:32:32.146756 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:33.147778 kubelet[2332]: E0213 15:32:33.147721 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:34.148589 kubelet[2332]: E0213 15:32:34.148532 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:35.149406 kubelet[2332]: E0213 15:32:35.149269 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:36.150624 kubelet[2332]: E0213 15:32:36.150573 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:37.151685 kubelet[2332]: E0213 15:32:37.151628 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:38.152028 kubelet[2332]: E0213 15:32:38.151968 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:39.152979 kubelet[2332]: E0213 15:32:39.152922 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:32:40.153152 kubelet[2332]: E0213 15:32:40.153087 2332 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"